Topic 1 - Exam A
Topic 1
Question #1
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data
that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must
minimize operational complexity.
Which solution meets these requirements?
A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3
bucket.
B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3
bucket. Then remove the data from the origin S3 bucket.
C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-
Region Replication to copy objects to the destination S3 bucket.
D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon
EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS
volume in that Region.
Correct Answer:
A
Highly Voted
8 months, 3 weeks ago
Selected Answer: A
S3 Transfer Acceleration is the best solution because it's faster and built for high-speed transfers; Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets.
upvoted 33 times
8 months, 2 weeks ago
I thought S3 Transfer Acceleration was based on Cross-Region Replication; I made a mistake.
upvoted 1 times
Highly Voted
2 months, 3 weeks ago
Thank you ExamTopics!!! I am so happy; today, 06/04/2023, I passed the exam with 793.
upvoted 11 times
2 months, 3 weeks ago
Is it enough to study the first 20 pages, which are free?
upvoted 2 times
2 months, 3 weeks ago
NOPE NOPE
upvoted 1 times
Most Recent
1 week, 2 days ago
Selected Answer: A
With Amazon S3 Transfer Acceleration, you can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance
transfer of larger objects.
upvoted 1 times
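Transfer Acceleration works by routing uploads through the nearest CloudFront edge location via a dedicated endpoint rather than the regular regional one. A minimal sketch of that endpoint naming (the bucket name is a hypothetical placeholder, not from the question):

```python
# Sketch only: shows the accelerate endpoint hostname that clients upload to
# once Transfer Acceleration is enabled on a bucket.
def accelerate_endpoint(bucket: str) -> str:
    # Regular endpoint:     <bucket>.s3.<region>.amazonaws.com
    # Accelerated endpoint: routed through the nearest edge location
    return f"{bucket}.s3-accelerate.amazonaws.com"

print(accelerate_endpoint("global-sensor-data"))
# global-sensor-data.s3-accelerate.amazonaws.com
```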
1 week, 2 days ago
Selected Answer: A
S3 Transfer Acceleration is the best solution because it's faster and built for high-speed transfers; Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets.
upvoted 1 times
1 week, 4 days ago
Selected Answer: A
To aggregate data from multiple global sites as quickly as possible in a single Amazon S3 bucket while minimizing operational complexity, the most suitable solution would be Option A: Turn on S3 Transfer Acceleration on the destination S3 bucket and use multipart uploads to directly upload site data to the destination S3 bucket.
Community vote distribution: A (95%), 5% other
In summary, Option A provides the most efficient and operationally simple solution to aggregate data from multiple global sites quickly into a
single Amazon S3 bucket. By leveraging S3 Transfer Acceleration and multipart uploads, the company can achieve rapid data ingestion while
minimizing complexity.
upvoted 1 times
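Multipart uploads split each object into parts that S3 uploads in parallel (each part 5 MiB-5 GiB, at most 10,000 parts per upload). A minimal sketch of the part-size arithmetic for the 500 GB worst case in the question, independent of any particular tooling:

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

MIN_PART = 5 * MiB   # S3 minimum part size (except the last part)
MAX_PARTS = 10_000   # S3 limit on parts per multipart upload

def choose_part_size(object_size: int, target_part: int = 64 * MiB) -> int:
    """Pick a part size that keeps the upload under the 10,000-part limit."""
    part = max(target_part, MIN_PART)
    # Double the part size until the object fits in at most 10,000 parts.
    while math.ceil(object_size / part) > MAX_PARTS:
        part *= 2
    return part

size = 500 * GiB                 # largest object in the scenario
part = choose_part_size(size)
parts = math.ceil(size / part)
print(part // MiB, parts)        # 64 8000
```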
2 weeks ago
I believe this is a very confusing practice. Should we rely on the answers given against questions or on the most-voted answers? ExamTopics should not give answers if most of them are wrong.
upvoted 1 times
2 weeks, 5 days ago
Passed this week with a score of 830. Style is identical to these questions - learn all the questions here and you will do well.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: A
Option A fulfills the requirements.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your
client and an S3 bucket.
https://aws.amazon.com/s3/transfer-acceleration/
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
Answer is A
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Keyword:
From GLOBAL sites as quickly as possible in a SINGLE S3 bucket.
Minimize operational complexity
A. is correct because S3 Transfer Acceleration supports high-speed transfer through edge locations, and you can upload immediately. Also, with multipart uploads, your big file can be uploaded in parallel.
B, C, and D do not minimize operational overhead and are not as fast as answer A.
upvoted 6 times
3 months ago
Selected Answer: A
Option A proposes using S3 Transfer Acceleration to speed up the data transfer to the destination S3 bucket. This service uses Amazon
CloudFront's globally distributed edge locations to accelerate transfers over the public internet. This would help to reduce the time it takes to
transfer data from each site to the destination S3 bucket.
upvoted 2 times
3 months ago
In addition, using multipart uploads would allow data to be uploaded in parts, which would reduce the impact of network latency and increase
overall throughput. This would help to further speed up the data transfer.
upvoted 1 times
3 months ago
Selected Answer: A
A is the simplest and most efficient solution for aggregating data from multiple global sites in a single Amazon S3 bucket.
upvoted 1 times
3 months ago
Selected Answer: A
A is best answer
upvoted 1 times
3 months ago
Selected Answer: A
Best answer is A
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
Best answer is A
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
Answer is A.
upvoted 1 times
Topic 1
Question #2
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket.
Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing
architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.
Correct Answer:
C
Highly Voted
8 months, 3 weeks ago
Answer: C
https://docs.aws.amazon.com/athena/latest/ug/what-is.html
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using
standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using
standard SQL to run ad-hoc queries and get results in seconds.
upvoted 33 times
8 months, 2 weeks ago
I agree C is the answer
upvoted 1 times
8 months, 3 weeks ago
C is right.
upvoted 1 times
Highly Voted
2 months, 3 weeks ago
Selected Answer: C
Keyword:
- Queries will be simple and will run on-demand.
- Minimal changes to the existing architecture.
A: Incorrect - It takes two steps: load all the content into Redshift, then run SQL queries. (These are simple queries, so we can use Athena; for complex queries we would apply Redshift.)
B: Incorrect - Our queries will run on-demand, so we don't need CloudWatch Logs to store the logs.
C: Correct - These are simple queries, so we can apply Athena directly on S3.
D: Incorrect - This takes two steps: use AWS Glue to catalog the logs, and use Spark to run SQL queries.
upvoted 6 times
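Athena reads newline-delimited JSON straight from S3, one JSON object per line. As a stand-in for the kind of ad-hoc query Athena runs, here is a pure-Python sketch over sample records (the `level` and `msg` fields are hypothetical, not from the question):

```python
import json

# Hypothetical newline-delimited JSON, the layout Athena's JSON SerDe expects.
log_lines = [
    '{"level": "INFO", "msg": "started"}',
    '{"level": "ERROR", "msg": "timeout"}',
    '{"level": "ERROR", "msg": "retry failed"}',
]

# Equivalent Athena query (run directly against the S3 location):
#   SELECT count(*) FROM app_logs WHERE level = 'ERROR';
records = [json.loads(line) for line in log_lines]
errors = sum(1 for r in records if r["level"] == "ERROR")
print(errors)  # 2
```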
Most Recent
2 days, 19 hours ago
Selected Answer: C
C is right
upvoted 1 times
1 week, 2 days ago
Selected Answer: C
I agree C
upvoted 1 times
1 week, 2 days ago
Selected Answer: C
https://docs.aws.amazon.com/athena/latest/ug/what-is.html
upvoted 1 times
1 week, 4 days ago
Selected Answer: C
To meet the requirements of analyzing log files stored in JSON format in an Amazon S3 bucket with minimal changes to the existing architecture
and minimal operational overhead, the most suitable option would be Option C: Use Amazon Athena directly with Amazon S3 to run the queries as
needed.
Amazon Athena is a serverless interactive query service that allows you to analyze data directly from Amazon S3 using standard SQL queries. It eliminates the need for infrastructure provisioning or data loading, making it a low-overhead solution.
Community vote distribution: C (100%)
Overall, Amazon Athena offers a straightforward and efficient solution for analyzing log files stored in JSON format, ensuring minimal operational
overhead and compatibility with simple on-demand queries.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: C
C is answer.
upvoted 1 times
1 month, 2 weeks ago
C is correct
upvoted 1 times
2 months ago
Selected Answer: C
Serverless to avoid operational overhead; C is the answer.
upvoted 2 times
2 months, 1 week ago
It is difficult to use SQL to query JSON-format files, which contradicts the simple queries mentioned in the question and would exclude all the SQL options.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: C
It's C, without a doubt. Options A and D require separate database infrastructure, which can increase operational costs. Option B is not suitable for this scenario since Amazon CloudWatch Logs does not support SQL queries directly and may require additional data transformation before the data can be analyzed.
upvoted 1 times
3 months ago
Selected Answer: C
Option C proposes using Amazon Athena directly with Amazon S3 to run queries as needed. This would allow for simple on-demand queries
without any additional infrastructure setup or maintenance. Athena is designed for querying data stored in S3 using SQL statements and can
handle a variety of file formats, including JSON. Athena also provides a serverless solution with no infrastructure to manage, allowing the solutions
architect to focus on the data analysis instead of the infrastructure.
upvoted 2 times
3 months ago
Selected Answer: C
Option C is the simplest and most efficient solution for analyzing log files stored in JSON format in an Amazon S3 bucket with minimal changes to
the existing architecture.
upvoted 1 times
3 months ago
Selected Answer: C
i choose C
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
Athena is a good choice.
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
Answer is C.
upvoted 1 times
3 months, 1 week ago
C is the correct option.
upvoted 2 times
Topic 1
Question #3
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3
bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in
AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization
events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Correct Answer:
A
Highly Voted
8 months, 3 weeks ago
Selected Answer: A
aws:PrincipalOrgID validates whether the principal accessing the resource belongs to an account in your organization.
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
upvoted 36 times
8 months, 2 weeks ago
The condition key aws:PrincipalOrgID can prevent members who don't belong to your organization from accessing the resource.
upvoted 9 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_overview.html
Condition keys: AWS provides condition keys that you can query to provide more granular control over certain actions.
The following condition keys are especially useful with AWS Organizations:
aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the
account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an organization, you can specify the
organization ID in the Condition element.
aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The aws:PrincipalOrgPaths
condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified organization path. A path is a text
representation of the structure of an AWS Organizations entity.
upvoted 9 times
Most Recent
1 week, 4 days ago
Selected Answer: A
Option A, which suggests adding the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy, is a
valid solution to limit access to the S3 bucket to users within the organization in AWS Organizations. It can effectively achieve the desired access
control.
It restricts access to the S3 bucket based on the organization ID, ensuring that only users within the organization can access the bucket. This
method is suitable if you want to restrict access at the organization level rather than individual departments or organizational units.
The operational overhead for Option A is also relatively low since it involves adding a global condition key to the S3 bucket policy. However, it is
important to note that the organization ID must be accurately configured in the bucket policy to ensure the desired access control is enforced.
In summary, Option A is a valid solution with minimal operational overhead that can limit access to the S3 bucket to users within the organization
using the aws:PrincipalOrgID global condition key.
upvoted 1 times
1 week, 5 days ago
A is the correct answer.
upvoted 1 times
2 months, 2 weeks ago
You can now use the aws:PrincipalOrgID condition key in your resource-based policies to more easily restrict access to IAM principals from
accounts in your AWS organization. For more information about this global condition key and policy examples using aws:PrincipalOrgID, read the
IAM documentation.
upvoted 1 times
Community vote distribution: A (94%), 6% other
2 months, 3 weeks ago
Selected Answer: A
Keywords:
- Company uses AWS Organizations
- Limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations
- LEAST amount of operational overhead
A: Correct - We just add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B: Incorrect - We could limit access this way, but it would take more operational overhead.
C: Incorrect - AWS CloudTrail only logs API events; it cannot prevent users from accessing the S3 bucket. To make this work, you would have to manually update the bucket policy for each account, and that approach would not cover new accounts added to the organization.
D: Incorrect - We could limit access this way, but it would take the most operational overhead.
upvoted 5 times
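The mechanism behind option A is a single bucket-policy condition. A sketch of such a policy built as a plain dict (the bucket name and organization ID below are hypothetical placeholders):

```python
import json

ORG_ID = "o-exampleorgid"   # hypothetical organization ID
BUCKET = "project-reports"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # The grant applies only to principals from accounts in this org.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
    }],
}

# This JSON is what would go into the S3 bucket policy editor.
print(json.dumps(policy, indent=2))
```

Because the condition references the organization ID rather than individual account IDs, new accounts joining the organization are covered without any policy update.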
3 months ago
Selected Answer: A
Option A proposes adding the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. This would limit access to the S3 bucket to only users of accounts within the organization in AWS Organizations, as the aws:PrincipalOrgID condition key can check whether the request is coming from within the organization.
upvoted 2 times
3 months ago
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy. This solution allows the S3 bucket to be accessed only by users within the organization in AWS Organizations while minimizing operational overhead by organizing users into OUs and using a single global condition key in the bucket policy. Option A, adding the aws:PrincipalOrgID global condition key, would require frequent updates to the policy as new users are added or removed from the organization. Option C, using CloudTrail to monitor events, would require manual updating of the policy based on the events. Option D, tagging each user, would also require manual tagging updates and may not be scalable for larger organizations with many users.
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
Answer is A.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
This is the least operationally overhead solution because it requires only a single configuration change to the S3 bucket policy, which will allow
access to the bucket for all users within the organization. The other options require ongoing management and maintenance. Option B requires the
creation and maintenance of organizational units for each department. Option C requires monitoring of specific CloudTrail events and updates to
the S3 bucket policy based on those events. Option D requires the creation and maintenance of tags for each user that needs access to the bucket.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Answered by ChatGPT with an explanation.
The correct solution that meets these requirements with the least amount of operational overhead is Option A: Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
Option A involves adding the aws:PrincipalOrgID global condition key to the S3 bucket policy, which allows you to specify the organization ID of
the accounts that you want to grant access to the bucket. By adding this condition to the policy, you can limit access to the bucket to only users of
accounts within the organization.
upvoted 4 times
6 months, 1 week ago
Option B involves creating organizational units (OUs) for each department and adding the aws:PrincipalOrgPaths global condition key to the S3
bucket policy. This option would require more operational overhead, as it involves creating and managing OUs for each department.
Option C involves using AWS CloudTrail to monitor certain events and updating the S3 bucket policy accordingly. While this option could
potentially work, it would require ongoing monitoring and updates to the policy, which could increase operational overhead.
upvoted 2 times
6 months, 1 week ago
Option D involves tagging each user that needs access to the S3 bucket and adding the aws:PrincipalTag global condition key to the S3
bucket policy. This option would require you to tag each user, which could be time-consuming and could increase operational overhead.
Overall, Option A is the most straightforward and least operationally complex solution for limiting access to the S3 bucket to only users of
accounts within the organization.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account (including the
master account) in the organization. For example, let’s say you have an Amazon S3 bucket policy and you want to restrict access to only principals
from AWS accounts inside of your organization. To accomplish this, you can define the aws:PrincipalOrgID condition and set the value to your
organization ID in the bucket policy. Your organization ID is what sets the access control on the S3 bucket. Additionally, when you use this
condition, policy permissions apply when you add new accounts to this organization without requiring an update to the policy.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: A
aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the
account IDs for all AWS accounts in an organization.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
I think that LEAST is the key. So A!
upvoted 1 times
6 months, 4 weeks ago
Selected Answer: A
A is the correct answer
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #4
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2
instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
A. Create a gateway VPC endpoint to the S3 bucket.
B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
C. Create an instance profile on Amazon EC2 to allow S3 access.
D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.
Correct Answer:
A
Highly Voted
8 months, 3 weeks ago
Selected Answer: A
VPC endpoint allows you to connect to AWS services using a private network instead of using the public Internet
upvoted 23 times
Highly Voted
2 months, 3 weeks ago
Selected Answer: A
Keywords:
- EC2 in VPC
- EC2 instance needs to access the S3 bucket without connectivity to the internet
A: Correct - A gateway VPC endpoint can connect to the S3 bucket privately at no additional cost.
B: Incorrect - You can set up an interface VPC endpoint for CloudWatch Logs for a private path from EC2 to CloudWatch, but from CloudWatch to the S3 bucket, log data can take up to 12 hours to become available for export, and the requirement only needs EC2-to-S3 connectivity.
C: Incorrect - Creating an instance profile just grants access; it doesn't help EC2 connect to S3 privately.
D: Incorrect - API Gateway acts like a proxy: it receives traffic from outside and forwards requests to AWS Lambda, Amazon EC2, Elastic Load Balancing products such as Application Load Balancers or Classic Load Balancers, Amazon DynamoDB, Amazon Kinesis, or any publicly available HTTPS-based endpoint - but not S3.
upvoted 11 times
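Creating the gateway endpoint is a single API call. A sketch of the request parameters as a plain dict (the VPC ID, route table ID, and Region are hypothetical; with boto3 these would be passed as `ec2.create_vpc_endpoint(**params)`):

```python
# Hypothetical IDs. A gateway endpoint for S3 adds a route-table entry that
# keeps S3 traffic on the AWS network instead of going over the internet.
params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.s3",   # Region-specific S3 service
    "RouteTableIds": ["rtb-0123456789abcdef0"],
}

# Sanity checks on the shape of the request:
assert params["VpcEndpointType"] == "Gateway"
assert params["ServiceName"].endswith(".s3")
print("endpoint request ok")
```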
Most Recent
1 week, 4 days ago
Selected Answer: A
Here's why Option A is the correct choice:
Gateway VPC Endpoint: A gateway VPC endpoint allows you to privately connect your VPC to supported AWS services. By creating a gateway VPC
endpoint for S3, you can establish a private connection between your VPC and the S3 service without requiring internet connectivity.
Private network connectivity: The gateway VPC endpoint for S3 enables your EC2 instance within the VPC to access the S3 bucket over the private
network, ensuring secure and direct communication between the EC2 instance and S3.
No internet connectivity required: Since the requirement is to access the S3 bucket without internet connectivity, the gateway VPC endpoint
provides a private and direct connection to S3 without needing to route traffic through the internet.
Minimal operational complexity: Setting up a gateway VPC endpoint is a straightforward process. It involves creating the endpoint and configuring
the appropriate routing in the VPC. This solution minimizes operational complexity while providing the required private network connectivity.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: A
A is right answer.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Option B) does not provide private network connectivity to S3.
Option C) does not provide private network connectivity to S3.
Option D) API Gateway with a private link provides private network connectivity between a VPC and an HTTP(S) endpoint, not S3.
Community vote distribution: A (100%)
upvoted 1 times
3 months ago
Selected Answer: A
Option A proposes creating a VPC endpoint for Amazon S3. A VPC endpoint enables private connectivity between the VPC and S3 without using an
internet gateway or NAT device. This would provide the EC2 instance with private network connectivity to the S3 bucket.
upvoted 2 times
3 months, 1 week ago
Selected Answer: A
A, my friend :)
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
Answer is A, but I was confused with C; an instance role alone would still route traffic through the internet.
upvoted 1 times
4 months ago
A VPC endpoint allows you to connect from the VPC to other AWS services outside of the VPC without the use of the internet.
upvoted 1 times
4 months ago
Selected Answer: A
A VPC endpoint enables the creation of a private connection between your VPC and supported AWS services and VPC endpoint services powered by PrivateLink, using private IP addresses. Traffic between the VPC and the AWS service does not leave the Amazon network.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
A is correct; a VPC endpoint is a connection between your VPC and an AWS service.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
VPC endpoint allows you to connect to AWS services using a private network instead of using the public Internet
upvoted 1 times
5 months, 2 weeks ago
A is correct
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
A gateway VPC endpoint is a connection between your VPC and an AWS service that enables private connectivity to the service. A gateway VPC
endpoint for S3 allows the EC2 instance to access the S3 bucket without requiring internet connectivity.
upvoted 3 times
Topic 1
Question #5
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS
volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in
another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they
refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents
B. Configure the Application Load Balancer to direct a user to the server with the documents
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server
Correct Answer:
C
Highly Voted
8 months, 3 weeks ago
Selected Answer: C
"Concurrent" or "at the same time" is a keyword for EFS.
upvoted 22 times
Highly Voted
7 months, 2 weeks ago
EBS doesn't support cross-AZ access; a volume resides in only one AZ, but EFS spans AZs. That's why it's C.
upvoted 15 times
1 month, 3 weeks ago
And just for clarification to others, you can have COPIES of the same EBS volume in one AZ and in another via EBS Snapshots, but don't confuse
that with the idea of having some sort of global capability that has concurrent copying mechanisms.
upvoted 2 times
Most Recent
1 week, 4 days ago
Selected Answer: C
To ensure users can see all their documents at once in the duplicated architecture with multiple EC2 instances and EBS volumes behind an
Application Load Balancer, the most appropriate solution is Option C: Copy the data from both EBS volumes to Amazon EFS (Elastic File System)
and modify the application to save new documents to Amazon EFS.
In summary, Option C, which involves copying the data to Amazon EFS and modifying the application to use Amazon EFS for document storage, is
the most appropriate solution to ensure users can see all their documents at once in the duplicated architecture. Amazon EFS provides scalability,
availability, and shared access, allowing both EC2 instances to access and synchronize the documents seamlessly.
upvoted 2 times
3 weeks, 6 days ago
Selected Answer: C
C is right answer.
upvoted 1 times
1 month ago
Selected Answer: C
C because the other options don't put all the data in one place.
upvoted 1 times
1 month, 1 week ago
Option C is the best answer, option D is pretty vague. All other options are obviously wrong.
upvoted 1 times
1 month, 1 week ago
The answer is B as it aligns with minimum bandwidth usage, and the time taken is 6-7 days, which is about the same as transferring over a 1 Gbps internet connection as per option C.
upvoted 1 times
Community vote distribution: C (100%)
1 month, 2 weeks ago
Selected Answer: C
C is correct
upvoted 1 times
2 months, 1 week ago
Option A is not a good solution because copying data to both volumes would not ensure consistency of the data.
Option B would require the Load Balancer to have knowledge of which documents are stored on which server, which would be difficult to maintain.
Option C is a viable solution, but may require modifying the application to use Amazon EFS instead of EBS.
Option D is a good solution because it would distribute the requests to both servers and return the correct document from the correct server. This
can be achieved by configuring session stickiness on the Load Balancer so that each user's requests are directed to the same server for consistency.
Therefore, the correct answer is D.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: C
Keyword:
Second EC2 instance and EBS volume. Users could see one subset of their documents or the other, but never all of the documents at the same time.
EBS: attaches to one instance (special io1/io2 Multi-Attach volumes can attach to multiple instances, but only a few)
EFS: can attach to multiple instances
A: Incorrect - EBS has no built-in function to copy data between two running EBS volumes.
B: Incorrect - We could use sticky sessions to forward the same user to the same server, but when the user loses the session they might be forwarded to another server.
C: Correct - Because both instances now point to one EFS data store, users will see all the data.
D: Incorrect - Traffic Mirroring is what sends requests to multiple servers; an Application Load Balancer doesn't send a request to both servers because it's designed to balance workload between servers. An ALB also cannot combine documents from both servers into one response.
upvoted 9 times
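The symptom in the question can be modelled directly: two servers each writing to their own EBS-like store show disjoint subsets, while one shared EFS-like store shows everything. A pure-Python sketch (document names hypothetical):

```python
# Before: each instance has its own EBS volume, so a user sees only the
# subset stored on whichever server the load balancer happened to pick.
ebs_a = {"doc1.pdf"}           # documents uploaded via instance A
ebs_b = {"doc2.pdf"}           # documents uploaded via instance B
assert ebs_a != ebs_a | ebs_b  # neither server alone has all documents

# After: both instances mount the same EFS file system, so every write
# is visible from either server.
efs = set()
efs |= ebs_a                   # one-time copy from volume A
efs |= ebs_b                   # one-time copy from volume B
efs.add("doc3.pdf")            # new uploads go straight to EFS
print(sorted(efs))             # ['doc1.pdf', 'doc2.pdf', 'doc3.pdf']
```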
3 months ago
Selected Answer: C
Option C proposes copying the data from both EBS volumes to Amazon EFS and modifying the application to save new documents to EFS. This
would ensure that all documents are accessible from both servers as EFS is a shared file storage service that can be mounted on multiple instances
simultaneously. Additionally, modifying the application to save new documents to EFS would ensure that any new documents are available on both
servers.
upvoted 1 times
3 months ago
Selected Answer: C
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
EBS is AZ-locked, my friend :)
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
Answer is C.
upvoted 1 times
4 months ago
Selected Answer: C
EFS automatically scales as users upload and delete files. EBS volumes can scale vertically by reconfiguring volume types and horizontally by attaching additional volumes to EC2 instances.
upvoted 1 times
Topic 1
Question #6
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3
bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the
device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a
new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the
S3 File Gateway.
D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a
public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point
the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
Let's analyse this:
B. On a Snowball Edge device you can copy files with a speed of up to 100Gbps. 70TB will take around 5600 seconds, so very quickly, less than 2
hours. The downside is that it'll take between 4-6 working days to receive the device and then another 2-3 working days to send it back and for
AWS to move the data onto S3 once it reaches them. Total time: 6-9 working days. Bandwidth used: 0.
C. File Gateway uses the Internet, so maximum speed will be at most 1Gbps, so it'll take a minimum of 6.5 days and you use 70TB of Internet
bandwidth.
D. You can achieve speeds of up to 10Gbps with Direct Connect. Total time 15.5 hours and you will use 70TB of bandwidth. However, what's
interesting is that the question does not specify what type of bandwidth. Direct Connect does not use your Internet bandwidth, as you get a
dedicated point-to-point connection between your on-prem network and the AWS Cloud, so technically you're not using your "public" bandwidth.
The requirements are a bit too vague, but I think B is the most appropriate answer, although D might also be correct if the bandwidth usage
refers strictly to your public connectivity.
upvoted 42 times
2 months, 3 weeks ago
This calculation is out of scope.
C is right because the company wants to use the LEAST POSSIBLE NETWORK BANDWIDTH. Therefore they don't want, or can't, use Snowball's
fast-transfer capability, because it would draw too much bandwidth within their company network.
upvoted 5 times
3 weeks, 3 days ago
Yeah, the company first uses NFS to store the data, then wants to move it to S3. With a service endpoint we don't need public
connectivity.
upvoted 1 times
1 month ago
NFS is using bandwidth within their company, so that logic does not apply.
upvoted 1 times
2 months, 1 week ago
you are out of scope
upvoted 5 times
4 months, 1 week ago
D is a viable solution, but setting up D can take weeks or months, and the question does say as soon as possible.
upvoted 3 times
4 months, 1 week ago
Community vote distribution: B (85%), Other
Time Calc Clarification:
Data: 70TB
= 70TB * 8b/B = 560Tb
= 560Tb * 1000Gb/Tb = 560,000Gb
Speed: 100Gb/s
Time = Data / Speed = 560,000Gb / 100Gb/s = 5600s
Time = 5600s / 3600s/hour ≈ 1.56 hours (in case of always-on max speed)
upvoted 2 times
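The back-of-the-envelope arithmetic above can be checked in a few lines of Python (a sketch: the 100 Gbit/s figure is the Snowball Edge adapter's rated maximum, and real sustained throughput will be lower):

```python
# Best-case time to copy 70 TB onto a Snowball Edge device,
# assuming the rated 100 Gbit/s network adapter speed.
data_tb = 70                       # total data, terabytes
data_gbit = data_tb * 8 * 1000     # TB -> Tbit -> Gbit
speed_gbit_s = 100                 # adapter speed, Gbit/s

seconds = data_gbit / speed_gbit_s
hours = seconds / 3600
print(f"{seconds:.0f} s ≈ {hours:.2f} h")  # 5600 s ≈ 1.56 h
```

The device round trip (shipping both ways plus AWS's import into S3) dominates the total time, which is the trade-off the analysis above describes.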
5 months, 2 weeks ago
But it said "as soon as possible". It takes about 4-6 weeks to provision a Direct Connect connection.
upvoted 8 times
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
It uses the least possible network bandwidth.
upvoted 27 times
Most Recent
1 week, 3 days ago
Selected Answer: B
Option A: It would require transferring the data over the network, which could consume a significant amount of bandwidth. This option does not
address the requirement of minimizing network bandwidth usage.
Option C: It would still involve network transfers, potentially utilizing a significant amount of bandwidth.
Option D: It would also involve network transfers. Although it provides a dedicated network connection, it doesn't address the requirement of
minimizing network bandwidth usage.
Option B: It is the most suitable solution in this scenario. With Snowball Edge, a physical device is sent to the company's premises. The large video
files can be directly transferred to the Snowball Edge device, bypassing the need for significant network transfers. Once the data is transferred to
the device, it can be returned to AWS, where AWS will import the data into Amazon S3. This approach minimizes network bandwidth usage by
using physical transfer rather than relying solely on network transfers.
upvoted 1 times
1 week, 5 days ago
Selected Answer: B
import video files with the least network bandwidth.
upvoted 1 times
3 weeks, 3 days ago
I think C is the correct answer because the company first uses NFS to store the data, and then wants to relocate to S3. So I think C is the
best solution.
upvoted 1 times
3 weeks, 4 days ago
Selected Answer: B
https://docs.aws.amazon.com/snowball/latest/developer-guide/jobs.html
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: B
B is the right answer.
upvoted 1 times
1 month ago
C is correct. The question clearly asks to use "the least possible network bandwidth", and one of AWS Snowball Edge's features is
network adapters with transfer speeds of up to 100 Gbit/second (URL: https://docs.aws.amazon.com/snowball/latest/developer-
guide/whatisedge.html#edge-feature-overview), so it can't be B.
upvoted 2 times
1 month ago
https://docs.aws.amazon.com/filegateway/latest/files3/MaintenanceUpdateBandwidth-common.html => C
upvoted 1 times
1 month ago
Selected Answer: B
I'll just have to get this one wrong on the exam.
upvoted 1 times
1 month ago
Selected Answer: B
B is the most reasonable, because the data is no longer growing, and the transfer is a one-off job.
upvoted 1 times
1 month, 1 week ago
Obviously the solution is B. We have to understand that the exam is based on real-life situations, and in those situations everything is not black or white;
there are nuances. You are to choose the most effective and balanced solution that does not involve extraneous processes. The salient info here is
70 TB, and the data is not growing. You don't want too much bandwidth, you don't want too long a time, but you don't need too much speed either. You don't want to
cache the files or work with them in a hybrid setup. The docs say Snowball Edge devices have approximately 39.5 TB or 80 TB of usable space. For example, if
you want to move 300 TB of data to AWS over 10 days and you have a transfer speed of 250 MB/s, you need four Snowball Edge devices. So one
device can handle the 70 TB; the files can also be moved in segments, smaller files can be batched up, and the largest file is in the range the device
can handle.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
Snowball Edge is the correct one, since almost no network bandwidth is used. This is a one-off migration, not a hybrid environment that would call for a File
Gateway.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
B will use the least possible network bandwidth
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: C
They forget about Snowball's delivery time, and also that it isn't available in all Regions.
upvoted 2 times
2 months, 2 weeks ago
Option B:
Using an AWS Snowball Edge device to transfer data is a low-bandwidth solution that enables the transfer of large amounts of data
without consuming significant network bandwidth. By using a Snowball Edge device, the company can minimize the network bandwidth
usage and migrate the video files to Amazon S3 as quickly as possible.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: B
The question states "LEAST POSSIBLE NETWORK BANDWIDTH" and the data size is not growing beyond 70TB, which means a Storage File
Gateway makes no sense as it's not a hybrid requirement any more.
upvoted 1 times
Topic 1
Question #7
A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these
messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to
decouple the solution and increase scalability.
Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU
metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store
them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon
SQS) subscriptions. Configure the consumer applications to process the messages from the queues.
Correct Answer:
A
Highly Voted
8 months, 3 weeks ago
Selected Answer: D
D makes more sense to me.
upvoted 33 times
5 months, 3 weeks ago
By default, an SQS queue can handle a maximum of 3,000 messages per second. However, you can request higher throughput by contacting
AWS Support. AWS can increase the message throughput for your queue beyond the default limits in increments of 300 messages per second,
up to a maximum of 10,000 messages per second.
It's important to note that the maximum number of messages per second that a queue can handle is not the same as the maximum number of
requests per second that the SQS API can handle. The SQS API is designed to handle a high volume of requests per second, so it can be used to
send messages to your queue at a rate that exceeds the maximum message throughput of the queue.
upvoted 5 times
5 months, 2 weeks ago
The limit that you're mentioning apply to FIFO queues. Standard queues are unlimited in throughput
(https://aws.amazon.com/sqs/features/). Do you think that the use case require FIFO queue ?
upvoted 9 times
4 months, 3 weeks ago
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon
SQS) subscriptions. Configure the consumer applications to process the messages from the queues.
This solution uses Amazon SNS and SQS to publish and subscribe to messages respectively, which decouples the system and enables scalability
by allowing multiple consumer applications to process the messages in parallel. Additionally, using Amazon SQS with multiple subscriptions can
provide increased resiliency by allowing multiple copies of the same message to be processed in parallel.
upvoted 5 times
6 months, 4 weeks ago
of course, the answer is D
upvoted 3 times
Highly Voted
7 months, 3 weeks ago
D. SNS fan-out pattern: https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html (A is wrong because Kinesis Data Analytics does not 'persist' by
itself.)
upvoted 14 times
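The fan-out pattern the link describes can be sketched as a toy model in plain Python (this is an illustration of the pattern, not the AWS API): every message published to the topic is copied into each subscribed queue, so dozens of consumers each receive their own independent copy.

```python
from collections import deque

class Topic:
    """Toy SNS-style topic: fan out each published message to all subscribed queues."""
    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:      # every subscriber gets its own copy
            q.append(message)

orders = Topic()
billing, shipping = deque(), deque()   # stand-ins for two SQS queues
orders.subscribe(billing)
orders.subscribe(shipping)

orders.publish({"order_id": 1})
orders.publish({"order_id": 2})

# Both consumers see both messages, independently of each other.
print(len(billing), len(shipping))  # 2 2
```

This decoupling is why D scales: each SQS queue buffers spikes for its own consumer, and consumers drain their queues at their own pace.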
Most Recent
1 week ago
D makes more sense!
Keyword: Decoupling= SQS
upvoted 1 times
1 week, 3 days ago
Selected Answer: D
Option A: It is more suitable for real-time analytics and processing of streaming data rather than decoupling and scaling message ingestion and
consumption.
Community vote distribution: D (78%), A (18%), 3%
Option B: It may help with scalability to some extent, but it doesn't provide decoupling.
Option C: It is a valid option, but it lacks the decoupling aspect. In this approach, the consumer applications would still need to read directly from
DynamoDB, creating tight coupling between the ingestion and consumption processes.
Option D: It is the recommended solution for decoupling and scalability. The ingestion application can publish messages to an SNS topic, and
multiple consumer apps can subscribe to the relevant SQS queues. SNS ensures that each message is delivered to all subscribed queues, allowing
the consuming apps to independently process the messages at their own pace and scale horizontally as needed. This provides loose coupling,
scalability, and fault tolerance, as the queues can handle message spikes and manage the consumption rate based on the consumer's processing
capabilities.
upvoted 1 times
1 week, 4 days ago
Selected Answer: D
If it says decouple, then it's SQS.
upvoted 1 times
1 week, 5 days ago
Selected Answer: A
SNS and SQS FIFO still have a standard limit of under 3,000 messages per second, so it does not meet the requirement.
Perhaps Amazon Kinesis Data Analytics is a possible solution: configure the consumer applications to read and process the messages. It
requires changing the architecture of this app, but that point doesn't matter for the requirement.
upvoted 1 times
3 weeks, 1 day ago
Selected Answer: D
Kinesis Data Analytics is for querying; I think D is more likely the better answer.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: D
SNS+SQS fan out makes more sense here. Answer D.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: D
Keyword: Decoupling= SQS
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: D
Confusing, but I'm going with my initial guess: It's D:
For most standard queues (depending on queue traffic and message backlog), there can be a maximum of approximately 120,000 in flight
messages (received from a queue by a consumer, but not yet deleted from the queue). If you reach this quota while using short polling, Amazon
SQS returns the OverLimit error message. If you use long polling, Amazon SQS returns no error messages. To avoid reaching the quota, you should
delete messages from the queue after they're processed.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/quotas-queues.html
upvoted 1 times
1 month, 1 week ago
Chat GPT says C is the correct answer here
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
Option A suggests persisting the messages to Amazon Kinesis Data Analytics, which focuses on analytics and processing rather than decoupling
and scalability.
Option D is correct because-
By publishing the incoming messages to an SNS topic, you can ensure that multiple consumer applications and microservices can subscribe to the
topic to receive the messages independently. The decoupling allows the producer application to send messages without being tightly coupled to
the consumers.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
D is correct
upvoted 1 times
1 month, 2 weeks ago
reponse: D
upvoted 1 times
2 months, 1 week ago
Selected Answer: D
By using the fanout pattern SNS + SQS, you can easily decouple your applications
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
upvoted 2 times
2 months, 2 weeks ago
Correct answer: D. Unlimited throughput: standard queues support a nearly unlimited number of transactions per second (TPS) per API action. Only
FIFO is limited (to about 30,000 messages per second), but you can use a standard SQS queue.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Keywords:
- The number of messages varies drastically
- Sometimes increases suddenly to 100,000 each second
A: Incorrect - Don't confuse Kinesis Data Analytics with Kinesis Data Streams =)) Kinesis Data Analytics gets its data from Kinesis Data
Streams, Kinesis Data Firehose, or MSK (Managed Streaming for Apache Kafka) for analytics purposes. It cannot consume messages and send them to
applications.
B: Incorrect - Based on the keywords, an Auto Scaling group doesn't scale well here because it needs time to check the CPU metric and time to start up
EC2 instances, while the messages vary drastically. Example: scaling from 10 to 100 EC2 instances, our servers may be down for a while during the scale-out.
C: Incorrect - Kinesis Data Streams can handle this case, but we would need more shards, not a single shard.
D: Correct - We can handle a high workload well with the fan-out pattern (SNS + multiple SQS queues) -> This is good for this use case:
- The number of messages varies drastically
- Sometimes increases suddenly to 100,000 each second
upvoted 10 times
2 months, 2 weeks ago
Oh... I confused Kinesis Data Analytics with Kinesis Data Streams, as you mentioned. I've solved several questions of this type, but SNS is
always about 'notification', so I chose A. But Kinesis Data Analytics is just wrong, so D is the most correct answer.
upvoted 1 times
Topic 1
Question #8
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary
server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes
resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon
EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon
EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure
AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure
Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the
compute nodes.
Correct Answer:
C
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
A - incorrect: Schedule scaling policy doesn't make sense.
C, D - incorrect: Primary server should not be in same Auto Scaling group with compute nodes.
B is correct.
upvoted 44 times
7 months, 3 weeks ago
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
upvoted 4 times
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
The answer seems to be B for me:
A: doesn't make sense to schedule auto-scaling
C: Not sure how CloudTrail would be helpful in this case, at all.
D: EventBridge is not really used for this purpose, wouldn't be very reliable
upvoted 13 times
Most Recent
1 week, 3 days ago
Selected Answer: B
Configuring an Amazon SQS queue as a destination for the jobs, implementing compute nodes with EC2 instances managed in an Auto Scaling
group, and configuring EC2 Auto Scaling based on the size of the queue is the most suitable solution. With this approach, the primary server can
enqueue jobs into the SQS queue, and the compute nodes can dynamically scale based on the size of the queue. This ensures that the compute
capacity adjusts according to the workload, maximizing resiliency and scalability. The SQS queue acts as a buffer, decoupling the primary server
from the compute nodes and providing fault tolerance in case of failures or spikes in the workload.
upvoted 2 times
1 week, 5 days ago
Selected Answer: B
The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over
time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
Configure scaling based on Amazon SQS
Tasks to do:
Step 1: Create a CloudWatch custom metric
Step 2: Create a target tracking scaling policy
Step 3: Test your scaling policy
upvoted 1 times
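The approach in those docs boils down to one custom metric: backlog per instance. A minimal sketch of the sizing calculation (the function name and the per-instance throughput figure are illustrative, not an AWS API):

```python
import math

def desired_capacity(queue_depth, running_instances, msgs_per_instance):
    """Size the Auto Scaling group so each instance handles at most
    `msgs_per_instance` queued messages (the target backlog per instance).

    queue_depth would come from the SQS ApproximateNumberOfMessages metric.
    """
    if queue_depth == 0:
        return running_instances   # nothing queued; keep the current size
    return max(1, math.ceil(queue_depth / msgs_per_instance))

# 12,000 queued messages, 3 instances running, each instance can
# work through about 1,000 messages in the scaling interval:
print(desired_capacity(12_000, 3, 1_000))  # 12
```

In practice you would publish the backlog-per-instance value as a CloudWatch custom metric and attach a target tracking policy to it, as the linked steps describe.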
1 month ago
It is B, no discussion.
Community vote distribution: B (94%), 3%
upvoted 1 times
1 month ago
Selected Answer: B
B. SQS with EC2 ASG based on the queue size. This scales with the varying load.
EventBridge and CloudTrail are not really suited to this application. ASG with a schedule does not work if the incoming jobs do not follow a
schedule.
upvoted 1 times
1 month, 1 week ago
Correct answer B, as the destination is an SQS queue whose size reflects the variable load. The ASG can scale out EC2 instances when the number of
messages in the queue goes beyond a particular threshold, and scale in when the queue size drops below the threshold.
upvoted 1 times
1 month, 1 week ago
Can someone justify why D is not the correct answer? It talks about scaling based on the load on the compute nodes, which I think is a reliable indicator for
scaling in and out.
B seems correct, but it uses the SQS queue size for auto scaling, which might not be the right choice here; would love to know your thoughts.
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: B
Eliminated wrong ones ACD
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
B is correct
upvoted 1 times
1 month, 2 weeks ago
B - Use SQS queue and then.....Configure EC2 Auto Scaling based on the size of the queue.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
B using SQS seems to be the right answer.
upvoted 1 times
2 months ago
Selected Answer: B
I think it makes the most sense among the answers.
C and D don't seem correct.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
keywords:
- Legacy platform consists of a primary server that coordinates jobs across multiple compute nodes.
- Maximizes resiliency and scalability.
A: Incorrect - The question doesn't mention a schedule for the high workload, so we don't use scheduled scaling in this case.
B: Correct - SQS can keep your messages in the queue during a high workload, and if it gets too high we can add EC2 instances based on the size of
the queue.
C: Incorrect - AWS CloudTrail records API logs; it is used for auditing AWS user activity.
D: Incorrect - EventBridge is used for filtering and routing events.
upvoted 9 times
2 months, 3 weeks ago
Selected Answer: B
B.
Explanation:
To maximize resiliency and scalability, the best solution is to use an Amazon SQS queue as a destination for the jobs. This decouples the primary
server from the compute nodes, allowing them to scale independently. This also helps to prevent job loss in the event of a failure.
Using an Auto Scaling group of Amazon EC2 instances for the compute nodes allows for automatic scaling based on the workload. In this case, it's
recommended to configure the Auto Scaling group based on the size of the Amazon SQS queue, which is a better indicator of the actual workload
than the load on the primary server or compute nodes. This approach ensures that the application can handle variable workloads, while also
minimizing costs by automatically scaling up or down the compute nodes as needed.
upvoted 4 times
2 months, 3 weeks ago
Selected Answer: B
Key words: maximizes resiliency and scalability
SQS: primary server can distribute jobs to multiple compute nodes.
upvoted 1 times
2 months, 4 weeks ago
A - incorrect: Schedule scaling policy doesn't make sense.
C, D - incorrect: Primary server should not be in same Auto Scaling group with compute nodes.
upvoted 1 times
Topic 1
Question #9
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after
the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available
storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle
management to avoid future storage issues.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier
Deep Archive after 7 days.
C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible
Retrieval after 7 days.
Correct Answer:
D
Highly Voted
8 months, 3 weeks ago
The answer directly points towards File Gateway with lifecycle policies: https://docs.aws.amazon.com/filegateway/latest/files3/CreatingAnSMBFileShare.html
D is wrong because the 'utility' is vague and there is no need for Flexible Retrieval storage.
upvoted 33 times
7 months ago
Yes, it might be vague, but how do we keep the low-latency access that only Flexible Retrieval can offer?
upvoted 2 times
Highly Voted
6 months, 3 weeks ago
Selected Answer: B
The B answer is correct. Low latency is only needed for newer files. Additionally, File Gateway provides low-latency access by caching frequently accessed
files locally, so the answer is B.
upvoted 16 times
Most Recent
1 day, 8 hours ago
Selected Answer: B
A. Isn't a good option because it doesn't address the cost-saving requirement.
B. Correct option, but there are no requirements specified about the access pattern of the data, so S3 Intelligent-Tiering might be preferable here.
C. This one is not possible for an on-prem solution.
D. Weird option; I cannot google this tool. Or we are talking about the AWS CLI and customization scripts... I won't go with it.
upvoted 1 times
1 week, 3 days ago
Selected Answer: B
Option B: Creating an Amazon S3 File Gateway is the recommended solution. It allows the company to extend their storage space by utilizing
Amazon S3 as a scalable and durable storage service. The File Gateway provides low-latency access to the most recently accessed files by
maintaining a local cache on-premises. The S3 Lifecycle policy can be configured to transition data to S3 Glacier Deep Archive after 7 days, which
provides long-term archival storage at a lower cost. This approach addresses both the storage capacity issue and the need for file lifecycle
management.
upvoted 1 times
3 weeks, 2 days ago
This question is tricky, because B and D sound good; however, you cannot transition to S3 Glacier after only 7 days — you have to wait at least 90
days in the D case and 180 days in the B case. Options A and C are more suitable, but C doesn't mention a lifecycle policy,
and with A there is nothing to extend the storage... So I think the answer should be A, because you are doing something like a backup.
3 weeks, 6 days ago
Selected Answer: B
Option B is the right answer.
upvoted 1 times
4 weeks, 1 day ago
Community vote distribution: B (89%), 11%
Selected Answer: B
B is the correct answer
upvoted 1 times
1 month ago
B: correct. D: wrong (why would you install a utility to access S3? I have never seen this). The point of this question is to find a solution (S3
File Gateway), use the file share, and then a lifecycle policy.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
File gateway provides caching and low latency access to the S3 bucket. S3 lifecycle policy can transfer file to Glacier after 7 days.
upvoted 1 times
1 month, 1 week ago
D is the right answer. The question says that files are accessed very rarely after 7 days, but the main requirement is not to lose low
latency, so "S3 Glacier Flexible Retrieval" is the right answer.
B is the wrong answer for another reason: an S3 File Gateway cannot access Glacier.
upvoted 1 times
1 month, 1 week ago
B is the correct answer; recently added items are cached on-premises while others can be stored in S3.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
B is correct
upvoted 1 times
2 months ago
Selected Answer: B
B it is.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: D
without losing low-latency is key to answering the question.
upvoted 2 times
1 month, 1 week ago
The question did mention low latency only for new files, not the old ones.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Keywords:
- After 7 days the files are rarely accessed.
-The total data size is increasing and is close to the company's total storage capacity.
- Increase the company's available storage space without losing low-latency access to the most recently accessed files -> (rarely accessed files
can be accessed with higher latency)
- Must also provide file lifecycle management to avoid future storage issues.
A: Incorrect - Doesn't address how to increase the company's available storage space.
B: Correct - Extends storage space with fast access via S3 File Gateway (which caches recently accessed files); reduces cost and storage by moving data to S3 Glacier
Deep Archive after 7 days.
C: Incorrect - Doesn't handle file lifecycle management.
D: Incorrect - Doesn't address increasing the company's available storage space.
upvoted 8 times
2 months, 3 weeks ago
Selected Answer: B
Explanation:
Since the company needs to increase available storage space while maintaining low-latency access to recently accessed files and implement file
lifecycle management to avoid future storage issues, the best solution is to use Amazon S3 with a File Gateway.
Using an Amazon S3 File Gateway, the company can access its SMB file server through an S3 bucket. This provides low-latency access to recently
accessed files by caching them on the gateway appliance. The solution also supports file lifecycle management by using S3 Lifecycle policies to
transition files to lower cost storage classes after they haven't been accessed for a certain period of time.
In this case, the company can create an S3 Lifecycle policy to transition files to S3 Glacier Deep Archive after 7 days of not being accessed. This
would allow the company to store large amounts of data at a lower cost, while still having easy access to recently accessed files.
upvoted 3 times
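The lifecycle rule described above can be written down concretely. A sketch of the configuration in the JSON shape that boto3's `put_bucket_lifecycle_configuration` accepts (the rule ID, empty prefix, and bucket name in the comment are placeholders):

```python
# Lifecycle rule sketch: transition objects to Glacier Deep Archive
# 7 days after creation. Rule ID and filter are illustrative.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 7, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}

# With boto3 this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["Transitions"][0]["StorageClass"])
```

The File Gateway's local cache handles the low-latency window for recent files; the rule above handles the cost side once they go cold.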
2 months, 3 weeks ago
Selected Answer: B
B: lower latency by accessing the local cache, and a lifecycle policy for files after 7 days in S3.
upvoted 3 times
2 months, 3 weeks ago
Keywords: low latency; lifecycle.
upvoted 1 times
Topic 1
Question #10
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway
REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application
receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application
receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
C. Use an API Gateway authorizer to block any requests while the application processes an order.
D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the
application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Correct Answer:
A
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
B because FIFO is made for that specific purpose
upvoted 43 times
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
Should be B because SQS FIFO queue guarantees message order.
upvoted 22 times
Most Recent
1 week ago
FIFO for sure, the correct is B
upvoted 1 times
1 week, 2 days ago
Selected Answer: B
First-in, first-out: strict ordering.
upvoted 1 times
1 week, 3 days ago
Selected Answer: B
Option B: Using an API Gateway integration to send a message to an Amazon SQS FIFO queue is the recommended solution. FIFO queues in
Amazon SQS guarantee the order of message delivery, ensuring that orders are processed in the order they are received. By configuring the SQS
FIFO queue to invoke an AWS Lambda function for processing, the application can process the orders sequentially while maintaining the order in
which they were received.
upvoted 2 times
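The ordering guarantee can be illustrated with a toy model (plain Python, not the SQS API): within a message group, messages come out in exactly the order they went in, which is what a FIFO queue promises per `MessageGroupId`.

```python
from collections import defaultdict, deque

class FifoQueue:
    """Toy SQS-FIFO model: strict arrival-order delivery within each message group."""
    def __init__(self):
        self.groups = defaultdict(deque)

    def send(self, group_id, body):
        self.groups[group_id].append(body)

    def receive(self, group_id):
        return self.groups[group_id].popleft()

q = FifoQueue()
for order in ["order-1", "order-2", "order-3"]:
    q.send("orders", order)

# Orders come back out exactly in arrival order.
print([q.receive("orders") for _ in range(3)])  # ['order-1', 'order-2', 'order-3']
```

A standard queue (option D) offers only best-effort ordering, which is why it fails the "processed in the order received" requirement.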
1 week, 5 days ago
Selected Answer: B
B: because FIFO is made for that specific purpose. It's important to take into consideration that the orders must be processed in the order received.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: B
Option B meets the requirements.
upvoted 1 times
4 weeks ago
B is the correct answer, just remember FIFO for ordered messages
upvoted 1 times
1 month, 1 week ago
Sorry, it can't be A. You have to read the specs. You need FIFO, but SNS FIFO doesn't work without SQS FIFO configured. Here is what the docs say:
"To preserve strict message ordering, Amazon SNS restricts the set of supported delivery protocols for Amazon SNS FIFO topics. Currently, the
endpoint protocol must be Amazon SQS, with an Amazon SQS FIFO queue's Amazon Resource Name (ARN) as the endpoint."
also
"To fan out messages from Amazon SNS FIFO topics to AWS Lambda functions, extra steps are required. First, subscribe Amazon SQS FIFO queues
to the topic. Then configure the queues to trigger the functions"
answer is B for the win
upvoted 1 times
1 month, 1 week ago
The most suitable option is B. However, my understanding is that SQS cannot invoke a Lambda function. Lambda should be invoked by other means, maybe an EventBridge schedule or something else, which would then poll messages from the queue. Experts, please confirm.
upvoted 1 times
1 month, 1 week ago
SQS queues can indeed be configured to trigger Lambda functions.
upvoted 3 times
1 month, 1 week ago
B is the correct answer.
upvoted 1 times
1 month, 2 weeks ago
Answer: B
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
B is correct. processed in the order that they are received = SQS FIFO.
upvoted 1 times
1 month, 3 weeks ago
May I have this PDF? Please send it to 409762370@qq.com
upvoted 1 times
2 months ago
Selected Answer: B
I am new to this site and lost. I would appreciate it if someone could explain why the "correct answer" shown in the solution is always wrong.
upvoted 6 times
1 month, 3 weeks ago
Read discussions mate
upvoted 2 times
2 months ago
Selected Answer: B
Amazon SNS FIFO can only have SQS FIFO queues as subscribers, which is not the case in answer A.
upvoted 2 times
2 months, 1 week ago
Selected Answer: B
B: Correct - SQS FIFO ensures messages are processed in order. FIFO -> first in, first out.
upvoted 2 times
Topic 1
Question #11
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the
database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of
credential management.
What should a solutions architect do to accomplish this goal?
A. Use AWS Secrets Manager. Turn on automatic rotation.
B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate
the credential file to the S3 bucket. Point the application to the S3 bucket.
D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2
instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.
Correct Answer:
B
Highly Voted
8 months, 3 weeks ago
Selected Answer: A
B is wrong because parameter store does not support auto rotation, unless the customer writes it themselves, A is the answer.
upvoted 54 times
8 months ago
READ!!! AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources.
This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
https://aws.amazon.com/cn/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/
https://aws.amazon.com/secrets-manager/?nc1=h_ls
upvoted 14 times
1 month ago
Read this - https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_parameterstore.html
It says SSM Parameter Store can't rotate secrets automatically.
upvoted 1 times
6 months, 3 weeks ago
correct. see link https://tutorialsdojo.com/aws-secrets-manager-vs-systems-manager-parameter-store/ for differences between SSM Parameter
Store and AWS Secrets Manager
upvoted 12 times
6 months, 3 weeks ago
That was a fantastic link. This part of their site "comparison of AWS services" is superb. Thanks.
upvoted 5 times
8 months, 1 week ago
Thanks, I was confused about that and you just mentioned the key phrase: B doesn't support auto rotation.
upvoted 1 times
Highly Voted
6 months ago
Admin is trying to fail everybody in the exam.
upvoted 30 times
2 months ago
He wants you to read the discussion part as well for better understanding.
upvoted 1 times
2 months, 4 weeks ago
Right? I found that a bunch of "correct" answers on here are not really correct, but they're not corrected? Hmmmm
upvoted 1 times
Most Recent
1 week, 3 days ago
Selected Answer: A
Option A: Using AWS Secrets Manager and enabling automatic rotation is the recommended solution for minimizing the operational overhead of
credential management. AWS Secrets Manager provides a secure and centralized service for storing and managing secrets, such as database
credentials. By leveraging Secrets Manager, the application can retrieve the database credentials programmatically at runtime, eliminating the need
to store them locally in a file. Enabling automatic rotation ensures that the database credentials are regularly rotated without manual intervention,
enhancing security and compliance.
upvoted 4 times
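To illustrate the "retrieve the database credentials programmatically at runtime" point: Secrets Manager returns a secret as a JSON `SecretString`, and RDS-managed secrets include `username` and `password` keys in that document. A small hypothetical helper that parses such a payload (the sample values below are invented):

```python
import json

def parse_db_credentials(secret_string):
    """Extract the username and password from a Secrets Manager SecretString
    payload shaped like an RDS database secret."""
    doc = json.loads(secret_string)
    return doc["username"], doc["password"]

# Example payload shaped like an RDS secret (all values are made up).
sample = (
    '{"username": "appuser", "password": "s3cr3t", '
    '"engine": "aurora-mysql", "host": "db.example.internal"}'
)
user, pw = parse_db_credentials(sample)
```

In a real application the `secret_string` would come from a `secretsmanager.get_secret_value` call instead of a local file, which is exactly what removes the credential-management overhead.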
2 weeks, 1 day ago
Selected Answer: A
A is a right choice
upvoted 1 times
3 weeks, 2 days ago
A is the right choice. In both, you can force rotation or delete the secrets, but in Secrets Manager you can use a Lambda function to generate secrets automatically. It's also integrated with RDS engines like Aurora, and it's mostly used for that purpose. You can use Parameter Store, but you have to update the parameters yourself.
upvoted 2 times
3 weeks, 6 days ago
Selected Answer: A
Option A meets this goal.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
A is correct
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
A is the most logical answer.
upvoted 1 times
1 month, 2 weeks ago
A is correct
upvoted 1 times
2 months ago
The key term used is "less operational overhead". Systems Manager Parameter Store allows you to do the same at no additional cost, whereas Secrets Manager charges $0.40 per secret. Nowhere does the text say that auto rotation is a must, and furthermore, enabling auto rotation is possible with Parameter Store.
upvoted 1 times
2 months ago
Selected Answer: A
The AWS doc clearly says to choose `AWS Secrets Manager` if auto rotation is required:
```
To implement password rotation lifecycles, use AWS Secrets Manager. You can rotate, manage, and retrieve database credentials, API keys, and
other secrets throughout their lifecycle using Secrets Manager. For more information, see What is AWS Secrets Manager? in the AWS Secrets
Manager User Guide.
```
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
upvoted 2 times
2 months ago
Selected Answer: A
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You
can store data such as passwords, database strings, and license codes as parameter values. However, Parameter Store doesn't provide automatic
rotation services for stored secrets. Instead, Parameter Store enables you to store your secret in Secrets Manager, and then reference the secret as
a Parameter Store parameter.
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_parameterstore.html
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
Definitely A
upvoted 1 times
2 months, 1 week ago
A is correct:
Secrets Manager: It was designed specifically for confidential information (like database credentials, API keys) that needs to be encrypted, so the
creation of a secret entry has encryption enabled by default. It also gives additional functionality like rotation of keys.
upvoted 1 times
2 months, 1 week ago
B sounds logical; nothing in the question says that auto rotation is supposed to be a key component, though.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
"Operational overhead of credential management" is the key term, which means secret rotation and encryption, and these are features of Secrets Manager.
Although Aurora doesn't have built-in integration with Secrets Manager, focusing on the keyword of the question gives the answer = A.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Keywords:
- User names and passwords that are stored locally in a file -> minimize the operational overhead of credential management. (Improve security
with lowest operational overhead)
A: Correct - with AWS Secrets Manager the username and password will be encrypted with KMS, and it also has automatic rotation.
B: Incorrect - AWS Systems Manager Parameter Store doesn't have automatic rotation.
C: Incorrect - we could apply this, but it requires a lot more work compared to AWS Secrets Manager.
D: Incorrect - we could do this, but it requires more operational overhead than AWS Secrets Manager.
upvoted 4 times
Topic 1
Question #12
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static
data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce
latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the
CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has
the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that
has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the
custom domain name as an endpoint for the web application.
D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has
the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the
other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Correct Answer:
C
Highly Voted
7 months, 2 weeks ago
Answer is A
Explanation - AWS Global Accelerator vs CloudFront
• They both use the AWS global network and its edge locations around the world
• Both services integrate with AWS Shield for DDoS protection.
• CloudFront
• Improves performance for both cacheable content (such as images and videos)
• Dynamic content (such as API acceleration and dynamic site delivery)
• Content is served at the edge
• Global Accelerator
• Improves performance for a wide range of applications over TCP or UDP
• Proxying packets at the edge to applications running in one or more AWS Regions.
• Good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP
• Good for HTTP use cases that require static IP addresses
• Good for HTTP use cases that required deterministic, fast regional failover
upvoted 57 times
4 months, 3 weeks ago
By creating a CloudFront distribution that has both the S3 bucket and the ALB as origins, the company can reduce latency for both the static
and dynamic data. The CloudFront distribution acts as a content delivery network (CDN), caching the data closer to the users and reducing the
latency. The company can then configure Route 53 to route traffic to the CloudFront distribution, providing improved performance for the web
application.
upvoted 3 times
Highly Voted
7 months, 2 weeks ago
Selected Answer: A
Q: How is AWS Global Accelerator different from Amazon CloudFront?
A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world.
CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and
dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge
to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT),
or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services
integrate with AWS Shield for DDoS protection.
upvoted 17 times
Most Recent
2 days, 23 hours ago
Answer is C, as the company wants to improve both static and dynamic delivery.
Check the screenshot in the following article; it shows the structure of this scenario.
https://tutorialsdojo.com/aws-global-accelerator-vs-amazon-cloudfront/
upvoted 1 times
1 week, 3 days ago
Selected Answer: A
Option A: Creating an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins is a valid approach. CloudFront is a content
delivery network (CDN) that caches and delivers content from edge locations, improving performance and reducing latency. By configuring
CloudFront to have the S3 bucket as an origin for static data and the ALB as an origin for dynamic data, the company can benefit from CloudFront's
caching and distribution capabilities. Routing traffic to the CloudFront distribution through Route 53 ensures that requests are directed to the
nearest edge location, further enhancing performance and reducing latency.
upvoted 2 times
2 weeks, 1 day ago
Selected Answer: A
Makes sense
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: A
Using CloudFront
upvoted 1 times
2 weeks, 3 days ago
Selected Answer: A
A is right
upvoted 1 times
2 weeks, 4 days ago
Selected Answer: A
I didn't believe the correct answer was A until I found this:
https://repost.aws/knowledge-center/cloudfront-distribution-serve-content
- Create one origin for your S3 bucket, and another origin for your load balancer.
- Create a behavior that specifies a path pattern to route all static content requests to the S3 bucket.
- Edit the Default (*) path pattern behavior and set its Origin as your load balancer.
upvoted 1 times
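The two-origin setup from that knowledge-center article can be pictured as a behavior table that CloudFront consults per request path. A simplified pure-Python sketch of that matching; the origin names and patterns are illustrative placeholders, and real CloudFront precedence rules are richer than plain first-match:

```python
import fnmatch

# Hypothetical behavior table mirroring the setup described above:
# static content goes to the S3 origin, everything else to the ALB.
BEHAVIORS = [
    ("/static/*", "s3-origin"),
    ("*", "alb-origin"),  # the Default (*) behavior, evaluated last
]

def resolve_origin(path):
    """Return the first origin whose path pattern matches the request path."""
    for pattern, origin in BEHAVIORS:
        if fnmatch.fnmatch(path, pattern):
            return origin

static_origin = resolve_origin("/static/logo.png")
dynamic_origin = resolve_origin("/api/cart")
```

This is why option A needs only one distribution: a single CloudFront distribution can front both origins and split traffic by path.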
4 weeks, 1 day ago
Selected Answer: A
I will not explain
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: C
I think it's C because the question states both static and dynamic data will be accessed with low latency. CloudFront for the static pages in S3 and Global Accelerator for the ALB; Global Accelerator reduces network hops, so latency will be minimal.
upvoted 1 times
4 days, 7 hours ago
But we cannot use Global Accelerator with CloudFront as an endpoint.
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
ChatGPT says C
upvoted 2 times
1 month, 1 week ago
Selected Answer: A
Correct is A in my opinion
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
2 months ago
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has
the ALB and the CloudFront distribution as endpoints - you can't use CloudFront as a Global Accelerator endpoint.
upvoted 3 times
2 months, 1 week ago
I think the answer is C.
CloudFront will point to S3, but since the static content is in S3, you need to use Global Accelerator for the dynamic content.
upvoted 2 times
2 months, 1 week ago
CloudFront allows you to provide lower latency and Global Accelerator improves performance for a variety of scenarios, hence the answer is C.
upvoted 1 times
2 months, 3 weeks ago
Keywords:
- The web application has static data and dynamic data. Static data in an Amazon S3 bucket.
- Improve performance and reduce latency for the static data and dynamic data.
- The company is using its own domain name registered with Amazon Route 53.
A: Correct - CloudFront has edge locations and caching for both dynamic and static content.
B: Incorrect - AWS Global Accelerator doesn't have a cache function, so static files need to be loaded directly from S3 every time.
- Besides that, we would configure CloudFront -> ALB, Accelerator -> S3, Route 53 -> CloudFront. That means all the traffic goes to CloudFront only; the Accelerator doesn't get any traffic.
C: Incorrect - Global Accelerator cannot use CloudFront as an endpoint.
D: Incorrect - we already have a domain name. Why would we use new domain names? How would everyone know the new domain names?
upvoted 5 times
Topic 1
Question #13
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the
credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets
Manager to rotate the secrets on a schedule.
B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the
required Regions. Configure Systems Manager to rotate the secrets on a schedule.
C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon
CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the
secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate
the secrets.
Correct Answer:
A
Highly Voted
8 months, 3 weeks ago
Selected Answer: A
A is correct.
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
upvoted 17 times
Most Recent
1 week, 3 days ago
Selected Answer: A
Option A: Storing the credentials as secrets in AWS Secrets Manager provides a dedicated service for secure and centralized management of
secrets. By using multi-Region secret replication, the company ensures that the secrets are available in the required Regions for rotation. Secrets
Manager also provides built-in functionality to rotate secrets automatically on a defined schedule, reducing operational overhead. This automation
simplifies the process of rotating credentials for the Amazon RDS for MySQL databases during monthly maintenance activities.
upvoted 2 times
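The multi-Region replication this comment describes is a single parameter at secret-creation time. A hedged sketch of the arguments an operator would pass to boto3's `secretsmanager.create_secret`; the secret name, value, and Regions are placeholders, and the code only builds the request rather than calling AWS:

```python
def build_replicated_secret_params(name, secret_string, replica_regions):
    """Build the keyword arguments for secretsmanager.create_secret(**params)
    with multi-Region replication enabled via AddReplicaRegions."""
    return {
        "Name": name,
        "SecretString": secret_string,
        # Each entry asks Secrets Manager to maintain a replica in that Region.
        "AddReplicaRegions": [{"Region": r} for r in replica_regions],
    }

params = build_replicated_secret_params(
    "prod/mysql",                 # placeholder secret name
    '{"username": "admin"}',      # placeholder credentials
    ["eu-west-1", "ap-southeast-2"],
)
```

Rotation is then scheduled separately (e.g. via `rotate_secret` with a rotation rule), and the rotated value propagates to the replica Regions automatically, which is what keeps the operational overhead low.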
3 weeks, 6 days ago
Selected Answer: A
A is correct answer.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
With Secrets Manager, you can store, retrieve, manage, and rotate your secrets, including database credentials, API keys, and other secrets. When
you create a secret using Secrets Manager, it’s created and managed in a Region of your choosing. Although scoping secrets to a Region is a
security best practice, there are scenarios such as disaster recovery and cross-Regional redundancy that require replication of secrets across
Regions. Secrets Manager now makes it possible for you to easily replicate your secrets to one or more Regions to support these scenarios.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Keywords:
- rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions
- LEAST operational overhead
A: Correct - AWS Secrets Manager supports:
- encrypting credentials for RDS, DocumentDB, Redshift, and other databases, plus key/value secrets
- multi-Region replication
- rotation on a schedule
B: Incorrect - secure string parameters only apply to Parameter Store; all the data in AWS Secrets Manager is encrypted.
C: Incorrect - it doesn't mention replicating the S3 bucket across Regions.
D: Incorrect - far too many steps compared to answer A.
upvoted 4 times
3 months ago
Selected Answer: A
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets
Manager to rotate the secrets on a schedule.
This solution is the best option for meeting the requirements with the least operational overhead. AWS Secrets Manager is designed specifically for
managing and rotating secrets like database credentials. Using multi-Region secret replication, you can easily replicate the secrets across the
required AWS Regions. Additionally, Secrets Manager allows you to configure automatic secret rotation on a schedule, further reducing the
operational overhead.
upvoted 1 times
4 months, 1 week ago
Selected Answer: A
A is correct.
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
It's A, as Secrets Manager does support replicating secrets into multiple AWS Regions:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: A
It's A; here the question specifies that we want the LEAST overhead.
upvoted 2 times
4 months, 2 weeks ago
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
AWS Secrets Manager is a secrets management service that enables you to store, manage, and rotate secrets such as database credentials, API
keys, and SSH keys. Secrets Manager can help you minimize the operational overhead of rotating credentials for your Amazon RDS for MySQL
databases across multiple Regions. With Secrets Manager, you can store the credentials as secrets and use multi-Region secret replication to
replicate the secrets to the required Regions. You can then configure Secrets Manager to rotate the secrets on a schedule so that the credentials
are rotated automatically without the need for manual intervention. This can help reduce the risk of secrets being compromised and minimize the
operational overhead of credential management.
upvoted 3 times
6 months ago
Selected Answer: A
Option A, storing the credentials as secrets in AWS Secrets Manager and using multi-Region secret replication for the required Regions, and
configuring Secrets Manager to rotate the secrets on a schedule, would meet the requirements with the least operational overhead.
AWS Secrets Manager allows you to store, manage, and rotate secrets, such as database credentials, across multiple AWS Regions. By enabling
multi-Region secret replication, you can replicate the secrets across the required Regions to allow for seamless rotation of the credentials during
maintenance activities. Additionally, Secrets Manager provides automatic rotation of secrets on a schedule, which would minimize the operational
overhead of rotating the credentials on a monthly basis.
upvoted 2 times
6 months ago
Option B, storing the credentials as secrets in AWS Systems Manager and using multi-Region secret replication, would not provide automatic
rotation of secrets on a schedule.
Option C, storing the credentials in an S3 bucket with SSE enabled and using EventBridge to invoke an AWS Lambda function to rotate the
credentials, would not provide automatic rotation of secrets on a schedule.
Option D, encrypting the credentials as secrets using KMS multi-Region customer managed keys and storing the secrets in a DynamoDB global
table, would not provide automatic rotation of secrets on a schedule and would require additional operational overhead to retrieve the secrets
from DynamoDB and use the RDS API to rotate the secrets.
upvoted 2 times
6 months ago
vote A !
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
AWS Secret Manager
upvoted 1 times
6 months, 2 weeks ago
A is correct
upvoted 1 times
6 months, 3 weeks ago
Most of these questions have secrets manager as the answer
upvoted 1 times
6 months, 3 weeks ago
"Rotate credentials" is the keyword, and Systems Manager doesn't support rotation. Check this link:
https://tutorialsdojo.com/aws-secrets-manager-vs-systems-manager-parameter-store/
upvoted 1 times
6 months, 2 weeks ago
Secrets Manager supports rotation but Systems Manager Parameter Store doesn't.
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #14
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2
Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce
application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions.
The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining
high availability.
Which solution will meet these requirements?
A. Use Amazon Redshift with a single node for leader and compute functionality.
B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Correct Answer:
C
Highly Voted
8 months, 3 weeks ago
Selected Answer: C
C. Aurora offers up to 5x the performance of MySQL on RDS and handles more read requests than writes; maintaining high availability = Multi-AZ deployment.
upvoted 24 times
Highly Voted
6 months ago
Selected Answer: C
Option C, using Amazon Aurora with a Multi-AZ deployment and configuring Aurora Auto Scaling with Aurora Replicas, would be the best solution
to meet the requirements.
Aurora is a fully managed, MySQL-compatible relational database that is designed for high performance and high availability. Aurora Multi-AZ
deployments automatically maintain a synchronous standby replica in a different Availability Zone to provide high availability. Additionally, Aurora
Auto Scaling allows you to automatically scale the number of Aurora Replicas in response to read workloads, allowing you to meet the demand of
unpredictable read workloads while maintaining high availability. This would provide an automated solution for scaling the database to meet the
demand of the application while maintaining high availability.
upvoted 7 times
6 months ago
Option A, using Amazon Redshift with a single node for leader and compute functionality, would not provide high availability.
Option B, using Amazon RDS with a Single-AZ deployment and configuring RDS to add reader instances in a different Availability Zone, would
not provide high availability and would not automatically scale the number of reader instances in response to read workloads.
Option D, using Amazon ElastiCache for Memcached with EC2 Spot Instances, would not provide a database solution and would not meet the
requirements.
upvoted 2 times
Most Recent
1 day, 16 hours ago
Selected Answer: C
Option C
upvoted 1 times
1 week, 3 days ago
Selected Answer: C
Option C: Using Amazon Aurora with a Multi-AZ deployment and configuring Aurora Auto Scaling with Aurora Replicas is the most appropriate
solution. Aurora is a MySQL-compatible relational database engine that provides high performance and scalability. With Multi-AZ deployment, the
database is automatically replicated across multiple Availability Zones for high availability. Aurora Auto Scaling allows the database to automatically
add or remove Aurora Replicas based on the workload, ensuring that read requests can be distributed effectively and the database can scale to
meet demand. This provides both high availability and automatic scaling to handle unpredictable read workloads.
upvoted 2 times
3 weeks, 6 days ago
Selected Answer: C
C meets the requirements.
upvoted 1 times
1 month ago
C: Aurora with read replicas.
upvoted 1 times
1 month, 2 weeks ago
Key words:
- Must support MySQL
- High Availability (must be multi-AZ)
- Auto Scaling
upvoted 3 times
1 month, 2 weeks ago
Selected Answer: C
C is correct since cost is not a concern.
upvoted 1 times
1 month, 2 weeks ago
It's Aurora with Multi-AZ deployment - Keywords > "unpredictable read workloads while maintaining high availability"
upvoted 1 times
1 month, 2 weeks ago
To automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability, you can use Amazon
Aurora with a Multi-AZ deployment. Aurora is a fully managed, MySQL-compatible database service that can automatically scale up or down based
on workload demands. With a Multi-AZ deployment, Aurora maintains a synchronous standby replica in a different Availability Zone (AZ) to
provide high availability in the event of an outage.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: C
Keywords:
- The database's performance degrades quickly as application load increases.
- The application handles more read requests than write transactions.
- Automatically scale the database to meet the demand of unpredictable read workloads
- Maintaining high availability.
A: Incorrect - Amazon Redshift uses columnar block storage, which is useful for data analytics and warehousing.
There are also issues when migrating from MySQL to Redshift: stored procedures, triggers, etc. A single node for the leader doesn't maintain high availability.
B: Incorrect - the requirement says "automatically scale the database to meet the demand of unpredictable read workloads", and this option is missing auto scaling.
C: Correct - it resolves both the high-availability and auto-scaling requirements.
D: Incorrect - Spot Instances can be stopped at any time, which doesn't maintain high availability.
upvoted 5 times
2 months, 3 weeks ago
Selected Answer: C
Amazon Aurora is a relational database engine that is compatible with MySQL and PostgreSQL. It is designed for high performance, scalability, and
availability. With a Multi-AZ deployment, Aurora automatically replicates the database to a standby instance in a different Availability Zone. This
provides high availability and fast failover in case of a primary instance failure.
Aurora Auto Scaling allows you to add or remove Aurora Replicas based on CPU utilization, connections, or custom metrics. This enables you to
automatically scale the read capacity of the database in response to application load. Aurora Replicas are read-only instances that can offload read
traffic from the primary instance. They are kept in sync with the primary instance using Aurora's distributed storage architecture, which enables
low-latency updates across the replicas.
upvoted 1 times
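Aurora replica auto scaling as described above is configured through Application Auto Scaling with a target-tracking policy. A sketch that only builds the request parameters for `register_scalable_target` and `put_scaling_policy` rather than calling AWS; the cluster ID, policy name, and target value are placeholders:

```python
def build_aurora_scaling_config(cluster_id, target_cpu, min_replicas, max_replicas):
    """Build the parameter dicts for the two Application Auto Scaling calls
    that enable Aurora replica auto scaling on CPU utilization."""
    resource_id = f"cluster:{cluster_id}"
    target = {
        "ServiceNamespace": "rds",
        "ResourceId": resource_id,
        # Scales the number of Aurora Replicas in the cluster.
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
        "MinCapacity": min_replicas,
        "MaxCapacity": max_replicas,
    }
    policy = {
        "PolicyName": "cpu-target-tracking",  # placeholder name
        "ServiceNamespace": "rds",
        "ResourceId": resource_id,
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            # Add/remove replicas to hold average reader CPU near this value.
            "TargetValue": target_cpu,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    }
    return target, policy

target, policy = build_aurora_scaling_config("prod-aurora", 60.0, 1, 5)
```

With this in place, read capacity follows the unpredictable workload automatically while the Multi-AZ standby covers availability, which is the combination the question asks for.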
3 months ago
Selected Answer: C
Option C: Using Amazon Aurora with a Multi-AZ deployment and configuring Aurora Auto Scaling with Aurora Replicas will provide both read
scalability and high availability. Aurora is a MySQL-compatible database that is designed to handle high read workloads. With Aurora's Multi-AZ
deployment, a replica will be created in a different Availability Zone for disaster recovery purposes. Aurora Replicas can also be used to scale read
workloads by adding read replicas.
upvoted 1 times
4 months ago
Selected Answer: C
Right Answer C.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: C
Amazon Aurora
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
C, because the other answers are not a good fit for the question.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
To automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability, you can use Amazon
Aurora with a Multi-AZ deployment. Aurora is a fully managed, MySQL-compatible database service that can automatically scale up or down based
on workload demands. With a Multi-AZ deployment, Aurora maintains a synchronous standby replica in a different Availability Zone (AZ) to
provide high availability in the event of an outage.
upvoted 1 times
Topic 1
Question #15
A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The
company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow
inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?
A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
I agree with C.
**AWS Network Firewall** is a stateful, managed network firewall and intrusion detection and prevention service for your virtual private cloud (VPC)
that you created in Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This
includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect.
upvoted 21 times
8 months, 2 weeks ago
And I'm not sure Traffic Mirroring can be for filtering
upvoted 3 times
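As an illustration of the rule-creation part of option C, here is a sketch of a stateful rule group definition in the shape accepted by Network Firewall's CreateRuleGroup API. The group name and denylisted domain are hypothetical; in practice this dict would be passed to boto3's "network-firewall" client.

```python
# Sketch of an AWS Network Firewall stateful rule group that denies traffic to
# a denylisted domain. All names and domains below are hypothetical examples.
rule_group = {
    "RuleGroupName": "block-bad-domains",                # hypothetical name
    "Type": "STATEFUL",                                  # stateful inspection, as the comment describes
    "Capacity": 100,
    "RuleGroup": {
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".bad-domain.example"],      # hypothetical denylisted domain
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"], # match on SNI and HTTP Host header
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
}
```

Rule groups like this one are then referenced from a firewall policy attached to the Network Firewall endpoints in the VPC.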
Highly Voted
5 months, 3 weeks ago
Selected Answer: C
I would recommend option C: Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production
VPC.
AWS Network Firewall is a managed firewall service that provides filtering for both inbound and outbound network traffic. It allows you to create
rules for traffic inspection and filtering, which can help protect your production VPC.
Option A: Amazon GuardDuty is a threat detection service, not a traffic inspection or filtering service.
Option B: Traffic Mirroring is a feature that allows you to replicate and send a copy of network traffic from a VPC to another VPC or on-premises
location. It is not a service that performs traffic inspection or filtering.
Option D: AWS Firewall Manager is a security management service that helps you to centrally configure and manage firewalls across your accounts.
It is not a service that performs traffic inspection or filtering.
upvoted 19 times
Most Recent
1 week, 3 days ago
Selected Answer: C
AWS Network Firewall is a managed network firewall service that allows you to define firewall rules to filter and inspect network traffic. You can
create rules to define the traffic that should be allowed or blocked based on various criteria such as source/destination IP addresses, protocols,
ports, and more. With AWS Network Firewall, you can implement traffic inspection and filtering capabilities within the production VPC, helping to
protect the network traffic.
In the context of the given scenario, AWS Network Firewall can be a suitable choice if the company wants to implement traffic inspection and
filtering directly within the VPC without the need for traffic mirroring. It provides an additional layer of security by enforcing specific rules for traffic
filtering, which can help protect the production environment.
upvoted 2 times
1 week, 4 days ago
Anyone with the contributor access, kindly help me. I'm in need of the last set of questions as a means of retake preparations.
upvoted 1 times
3 weeks, 3 days ago
B is the correct answer
upvoted 1 times
3 weeks, 5 days ago
Selected Answer: B
option B with Traffic Mirroring is the most suitable solution for mirroring the traffic from the production VPC to an inspection instance or tool,
allowing you to perform traffic inspection and filtering as required.
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
C is correct as the option uses AWS services to fully meet the requirement.
Had the question not asked for a solution "in the AWS cloud", option B could be a correct option too, but a costlier one, since the user has to pay
for network data for every bit of traffic replicated between the AWS cloud and the on-prem location.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
Traffic Mirroring will allow you to inspect and filter traffic using a server, (note company had a on-premise server for Traffic filtering )
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
Option B, using Traffic Mirroring, is the most appropriate solution. Traffic Mirroring allows you to capture and forward network traffic from an
Amazon VPC to an inspection instance or service for analysis and filtering. By mirroring the traffic from the production VPC, you can send it to an
inspection server or a dedicated service that performs the required traffic flow inspection and filtering, replicating the functionalities of the on-
premises inspection server.
upvoted 1 times
3 weeks, 5 days ago
Yes, so says chatgpt
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: C
C is correct
upvoted 1 times
1 month, 2 weeks ago
Network Firewall is for inspection and traffic filtering.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-filters.html
A traffic mirror filter is a set of inbound and outbound rules that determines which traffic is copied from the traffic mirror source and sent to the
traffic mirror target. You can also choose to mirror certain network services traffic, including Amazon DNS. When you add network services traffic,
all traffic (inbound and outbound) related to that network service is mirrored.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Traffic Mirroring is a solution that allows you to copy network traffic from one network interface of an EC2 instance to another for further analysis.
This solution can be used to implement traffic inspection and filtering in the AWS Cloud, and it is particularly suitable for scenarios where an
existing traffic inspection server is already in place, such as in this case. By using Traffic Mirroring, the company can replicate the same
functionalities of its on-premises inspection server in the AWS Cloud.
Option C, AWS Network Firewall, is a managed network firewall service that provides network traffic inspection and filtering rules. It can be used to
inspect and filter traffic, but it requires additional configuration to be implemented effectively.
upvoted 2 times
2 months, 1 week ago
Traffic Mirroring can't FILTER
upvoted 2 times
1 month, 3 weeks ago
FILTER traffic is an optional parameter in Traffic mirroring.
upvoted 1 times
1 month ago
But the filter is a filter applied to the traffic to be mirrored.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
Key words: inspection and traffic filtering
upvoted 1 times
3 months, 1 week ago
Is this Network Firewall the same thing as NACL (Network Access Control List) for the VPC ?
upvoted 3 times
4 months ago
Selected Answer: C
C is correct. AWS Network Firewall supports both inspection and filtering as required.
B is incorrect because Traffic Mirroring only for inspection.
upvoted 2 times
4 months ago
Option B, using Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering, is the most appropriate solution for
the company's requirements. Traffic Mirroring allows the company to replicate network traffic to an Amazon Elastic Compute Cloud (Amazon EC2)
instance or an Amazon Partner Network (APN) partner for inspection and filtering. The inspection server can be set up in an EC2 instance, and
traffic from the production VPC can be mirrored to this instance for inspection and filtering, similar to how the on-premises inspection server
operated. This solution allows the company to maintain the same functionalities they had on-premises and also provides them with greater
flexibility and scalability in the AWS Cloud.
upvoted 2 times
Topic 1
Question #16
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a
reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team
should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate IAM roles.
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate users and groups.
C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce
reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS
for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the
reports.
Correct Answer:
D
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/quicksight/latest/user/sharing-a-dashboard.html
upvoted 49 times
8 months, 2 weeks ago
https://docs.aws.amazon.com/quicksight/latest/user/share-a-dashboard-grant-access-users.html
^ more precise link
upvoted 9 times
8 months, 2 weeks ago
Agree with you
upvoted 2 times
Highly Voted
2 months, 3 weeks ago
Selected Answer: B
Keywords:
- Data lake on AWS.
- Consists of data in Amazon S3 and Amazon RDS for PostgreSQL.
- The company needs a reporting solution that provides data VISUALIZATION and includes ALL the data sources within the data lake.
A - Incorrect: Amazon QuickSight only supports users (Standard edition) and groups (Enterprise edition). These users and groups exist only within
QuickSight; dashboards cannot be shared with IAM roles. We use users and groups to view the QuickSight dashboard.
B - Correct: as explained for answer A, and QuickSight can create dashboards from S3, RDS, Redshift, Aurora, Athena, OpenSearch, and Timestream.
C - Incorrect: This approach doesn't support visualization and doesn't explain how to process the RDS data.
D - Incorrect: This approach doesn't support visualization and doesn't explain how to combine the RDS and S3 data.
upvoted 12 times
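As a concrete illustration of option B's sharing model, here is a sketch of the permission grants in the shape accepted by QuickSight's UpdateDashboardPermissions API. The account ID, dashboard ID, and group names are all hypothetical; the point is that the principals are QuickSight group ARNs, not IAM role ARNs.

```python
# Sketch of QuickSight dashboard permission grants: full access for a
# "management" group, read-only for everyone else. All IDs/ARNs are hypothetical.
grant_permissions = {
    "AwsAccountId": "123456789012",                      # hypothetical account
    "DashboardId": "data-lake-dashboard",                # hypothetical dashboard
    "GrantPermissions": [
        {
            # QuickSight group principal, NOT an IAM role ARN
            "Principal": "arn:aws:quicksight:us-east-1:123456789012:group/default/management",
            "Actions": [
                "quicksight:DescribeDashboard",
                "quicksight:ListDashboardVersions",
                "quicksight:QueryDashboard",
                "quicksight:UpdateDashboard",            # full access for management
                "quicksight:DeleteDashboard",
                "quicksight:UpdateDashboardPermissions",
            ],
        },
        {
            "Principal": "arn:aws:quicksight:us-east-1:123456789012:group/default/all-staff",
            "Actions": [
                "quicksight:DescribeDashboard",          # limited, read-only access
                "quicksight:QueryDashboard",
            ],
        },
    ],
}
```

In practice this dict would be passed to boto3's "quicksight" client; the two-tier grant mirrors the question's full-vs-limited access requirement.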
Most Recent
2 days, 23 hours ago
Answer is B
Dashboard cannot be shared with roles.
https://docs.aws.amazon.com/quicksight/latest/user/share-a-dashboard-grant-access-users.html
upvoted 1 times
1 week, 3 days ago
Selected Answer: B
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share
the dashboards with the appropriate users and groups.
Amazon QuickSight is a business intelligence (BI) tool provided by AWS that allows you to create interactive dashboards and reports. It supports a
variety of data sources, including Amazon S3 and Amazon RDS for PostgreSQL, which are the data sources in the company's data lake.
Option A (Create an analysis in Amazon QuickSight and share with IAM roles) is incorrect because it suggests sharing with IAM roles, which are
more suitable for managing access to AWS resources rather than granting access to specific users or groups within QuickSight.
upvoted 2 times
2 weeks, 4 days ago
Selected Answer: C
B: is wrong because Quicksight has to load all datasets from S3 into SPICE which is expensive and impossible for a whole data lake. Question says
report contains all data from the lake.
D: Is wrong, because Athena does not allow generating reports except as file, which does not have visualizations
C: Glue can access all available sources (RDS and S3), perform aggregation and using the driver to generate Visualization with Python and storing it
as PDF on S3
upvoted 2 times
3 weeks, 2 days ago
Selected Answer: D
There is something weird here... QuickSight is a good option; however, note that the question says the management group is the only one with
full access, and for that you need IAM roles, because the groups only apply within QuickSight. Also, the groups that can see the dashboard have
the ability to see the underlying data. IMPORTANT: as far as I know, you don't have to create a new dataset to show a dashboard; also, QuickSight
CANNOT do that, only Glue is capable.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: B
I vote for option B.
upvoted 1 times
1 month ago
tricky question, Users, groups and roles can have access.
Viewing who has access to a dashboard
Use the following procedure to see which users or groups have access to the dashboard.
Open the published dashboard and choose Share at upper right. Then choose Share dashboard.
In the Share dashboard page that opens, under Manage permissions, review the users and groups, and their roles and settings.
You can search to locate a specific user or group by entering their name or any part of their name in the search box at upper right. Searching is
case-sensitive, and wildcards aren't supported. Delete the search term to return the view to all users.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
Amazon QuickSight with users and groups. B is correct.
upvoted 2 times
1 month, 2 weeks ago
Amazon QuickSight uses users and groups and not Iam roles.
upvoted 1 times
2 months ago
D is incorrect, I think. If we needed to use Athena, we could have used it with S3 as well because it is capable of doing so, whereas for PostgreSQL
we could use a federated query. But here it is using Glue, then a crawler, which I think doesn't make sense.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: B
You can share dashboards and visuals with specific users or groups in your account or with everyone in your Amazon QuickSight account.
upvoted 2 times
2 months, 2 weeks ago
Keywords :
1. visualization -> QuickSight
2. The rest of the company should have only limited access -> IAM role, user group
upvoted 1 times
2 months, 3 weeks ago
B is correct
upvoted 1 times
3 months ago
Selected Answer: B
Option B is the correct answer because Amazon QuickSight's sharing mechanism is based on users and groups, not IAM roles. IAM roles are used
for granting permissions to AWS resources, but they are not directly used for sharing QuickSight dashboards.
In option B, you create an analysis in Amazon QuickSight, connect all the data sources (Amazon S3 and Amazon RDS for PostgreSQL), and create
new datasets. After publishing dashboards to visualize the data, you share them with appropriate users and groups. This approach allows you to
control the access levels for different users, such as providing full access to the management team and limited access to the rest of the company.
This solution meets the requirements specified in the question.
upvoted 4 times
3 months ago
Selected Answer: B
Amazon QuickSight is a cloud-based business intelligence (BI) service that makes it easy to create and publish interactive dashboards that include
data visualizations from multiple data sources. By using QuickSight, the company can connect to both Amazon S3 and Amazon RDS for PostgreSQL
and create new datasets that combine data from both sources. The company can then use QuickSight to create interactive dashboards that
visualize the data and provide data insights.
To limit access to the visualizations, the company can use QuickSight's built-in security features. QuickSight allows you to define fine-grained
access control at the user or group level. This way, the management team can have full access to all the visualizations, while the rest of the
company can have only limited access.
upvoted 2 times
3 months, 2 weeks ago
B. Amazon QuickSight as a reporting solution can provide data visualization and reporting capabilities that include all data sources within the data
lake, while also providing different levels of access to different users.
upvoted 1 times
Topic 1
Question #17
A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for
document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.
What should the solutions architect do to meet this requirement?
A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Always remember that you should associate IAM roles to EC2 instances
upvoted 48 times
Highly Voted
6 months ago
Selected Answer: A
The correct option to meet this requirement is A: Create an IAM role that grants access to the S3 bucket and attach the role to the EC2 instances.
An IAM role is an AWS resource that allows you to delegate access to AWS resources and services. You can create an IAM role that grants access to
the S3 bucket and then attach the role to the EC2 instances. This will allow the EC2 instances to access the S3 bucket and the documents stored
within it.
Option B is incorrect because an IAM policy is used to define permissions for an IAM user or group, not for an EC2 instance.
Option C is incorrect because an IAM group is used to group together IAM users and policies, not to grant access to resources.
Option D is incorrect because an IAM user is used to represent a person or service that interacts with AWS resources, not to grant access to
resources.
upvoted 27 times
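To make option A concrete, here is a sketch of the two policy documents behind an EC2 instance role: the trust policy that lets the EC2 service assume the role, and the permissions policy attached to the role that grants S3 access. The bucket name is hypothetical.

```python
import json

# Trust policy: allows the EC2 service to assume this role (attached via an
# instance profile). This is what makes "attach the role to the EC2 instances" work.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},  # EC2 assumes the role
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permissions policy: grants access to the document bucket. The bucket name
# below is a hypothetical placeholder.
s3_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-document-bucket",          # bucket-level (ListBucket)
                "arn:aws:s3:::my-document-bucket/*",        # object-level (Get/Put)
            ],
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The instance then obtains temporary credentials automatically through the instance metadata service; no access keys need to be stored on the instance.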
Most Recent
1 week, 3 days ago
Selected Answer: A
Option A is the correct approach because IAM roles are designed to provide temporary credentials to AWS resources such as EC2 instances. By
creating an IAM role, you can define the necessary permissions and policies that allow the EC2 instances to access the S3 bucket securely.
Attaching the IAM role to the EC2 instances will automatically provide the necessary credentials to access the S3 bucket without the need for
explicit access keys or secrets.
Option B is not recommended in this case because IAM policies alone cannot be directly attached to EC2 instances. Policies are usually attached to
IAM users, groups, or roles.
Option C is not the most appropriate choice because IAM groups are used to manage collections of IAM users and their permissions, rather than
granting access to specific resources like S3 buckets.
Option D is not the optimal solution because IAM users are intended for individual user accounts and are not the recommended approach for
granting access to resources within EC2 instances.
upvoted 2 times
1 month, 1 week ago
IAM Roles manage who/what has access to your AWS resources, whereas IAM policies control their permissions.
Therefore, a Policy alone is useless without an active IAM Role or IAM User.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
Always use an IAM role for an EC2 instance
upvoted 1 times
2 months, 3 weeks ago
Keywords: EC2 instances can access the S3 bucket.
A: Correct - An IAM role is used to grant access to AWS services like EC2, Lambda, ...
B: Incorrect - An IAM policy is attached to IAM identities; you cannot attach a policy directly to an EC2 instance (an AWS service).
C: Incorrect - An IAM group is a grouping of permissions attached to a list of users.
D: Incorrect - To make this work on EC2 we would need an access key and secret access key, not a user account. Even using a user's access key
and secret access key is not recommended, because anyone who can access the EC2 instance can get the access key and secret access key and
obtain all of the owner's permissions. The secure way is an IAM role, which grants the EC2 instance just enough permissions.
upvoted 4 times
2 months, 3 weeks ago
A is correct
upvoted 1 times
3 months ago
Selected Answer: A
https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
IAM Role is the correct answer.
upvoted 1 times
4 months ago
Selected Answer: A
IAM Role is the correct answer.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
IAM Role
upvoted 1 times
5 months ago
Selected Answer: A
Associate IAM roles to EC2 instances
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
An IAM role is an AWS identity that you can create and use to delegate permissions to AWS resources. To give the EC2 instances access to the S3
bucket, you can create an IAM role that grants the necessary permissions and then attach the role to the instances. This will allow the instances to
access the S3 bucket using the permissions granted by the role.
upvoted 1 times
6 months ago
it's A: Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
upvoted 1 times
6 months, 1 week ago
A is the correct answer
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
How can I grant my Amazon EC2 instance access to an Amazon S3 bucket?
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/
upvoted 1 times
Topic 1
Question #18
An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads
an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an
AWS Lambda function, and store the image in its compressed form in a different S3 bucket.
A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.
Which combination of actions will meet these requirements? (Choose two.)
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an
image is uploaded to the S3 bucket.
B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS
message is successfully processed, delete the message in the queue.
C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text
file in memory and use the text file to keep track of the images that were processed.
D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue,
log the file name in a text file on the EC2 instance and invoke the Lambda function.
E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert
to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.
Correct Answer:
AB
Highly Voted
8 months, 2 weeks ago
Selected Answer: AB
It looks like A-B
upvoted 15 times
Highly Voted
6 months ago
Selected Answer: AB
To design a solution that uses durable, stateless components to process images automatically, a solutions architect could consider the following
actions:
Option A involves creating an SQS queue and configuring the S3 bucket to send a notification to the queue when an image is uploaded. This
allows the application to decouple the image upload process from the image processing process and ensures that the image processing process is
triggered automatically when a new image is uploaded.
Option B involves configuring the Lambda function to use the SQS queue as the invocation source. When the SQS message is successfully
processed, the message is deleted from the queue. This ensures that the Lambda function is invoked only once per image and that the image is not
processed multiple times.
upvoted 12 times
6 months ago
Option C is incorrect because it involves storing state (the file name) in memory, which is not a durable or scalable solution.
Option D is incorrect because it involves launching an EC2 instance to monitor the SQS queue, which is not a stateless solution.
Option E is incorrect because it involves using Amazon EventBridge (formerly Amazon CloudWatch Events) to send an alert to an Amazon
Simple Notification Service (Amazon SNS) topic, which is not related to the image processing process.
upvoted 8 times
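A minimal sketch of the A + B wiring described above: the S3 notification configuration that pushes s3:ObjectCreated events into the queue, and a skeleton Lambda handler for the SQS invocation. The queue ARN is a hypothetical placeholder, and the actual compression step is elided.

```python
import json

# S3 bucket notification configuration (option A): each object-created event is
# sent to the SQS queue. The queue ARN below is hypothetical; in practice this
# dict is passed to put_bucket_notification_configuration.
notification_config = {
    "QueueConfigurations": [
        {
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-uploads",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

def handler(event, context):
    """SQS-triggered Lambda (option B): each SQS record's body is an S3 event
    notification serialized as JSON. Returns the uploaded object keys."""
    keys = []
    for record in event["Records"]:
        s3_event = json.loads(record["body"])      # S3 notification inside the SQS body
        for s3_record in s3_event.get("Records", []):
            keys.append(s3_record["s3"]["object"]["key"])
    # ...here the image would be compressed and written to the destination bucket...
    return keys
```

With SQS as the event source, Lambda deletes successfully processed messages from the queue automatically; failed batches become visible again for retry, which is what makes the design durable.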
Most Recent
1 week, 3 days ago
Selected Answer: AB
Option A is a correct because it allows for decoupling between the image upload process and image processing. By configuring S3 to send a
notification to SQS, image upload event is recorded and can be processed independently by microservice.
Option B is also a correct because it ensures that Lambda is triggered by messages in SQS. Lambda can retrieve image information from SQS,
process and compress image, and store compressed image in a different S3. Once processing is successful, Lambda can delete processed message
from SQS, indicating that image has been processed.
Option C is not recommended because it introduces a stateful approach by using a text file to keep track of processed images.
Option D is not optimal solution as it introduces unnecessary complexity by involving an EC2 to monitor SQS and maintain a text file.
Option E is not directly related to requirement of processing images automatically. Although EventBridge and SNS can be useful for event
notifications and further processing, they don't provide the same level of durability and scalability as SQS.
upvoted 2 times
1 month, 1 week ago
Selected Answer: AB
Option A nad B
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: AB
A and B
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: AB
Keywords:
- Store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function.
- Durable, stateless components to process the images automatically
A, B: Correct - SQS has a message retention function (it stores messages), with a default of 4 days (which can be increased up to 14 days), so you
can re-run the Lambda if there are any errors when processing the images.
C: Incorrect - A Lambda function just runs the request and then stops; the max timeout is 15 minutes. So we cannot store data in the RAM of a
Lambda function.
D: Incorrect - We can trigger Lambda directly from SQS; no need for an EC2 instance in this case.
E: Incorrect - It is a kind of manual step -> the owner has to read the email and then process it :))
upvoted 3 times
3 months ago
Selected Answer: AB
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image
is uploaded to the S3 bucket.
B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message
is successfully processed, delete the message in the queue.
upvoted 2 times
3 months, 2 weeks ago
Selected Answer: AB
Agree with the general answer. its A+B.
upvoted 1 times
3 months, 4 weeks ago
Why B?
Message gets automatically deleted from queue once it goes out of it. FIFO
upvoted 1 times
3 months, 3 weeks ago
Not deleted but hidden while being processed
upvoted 1 times
4 months ago
Selected Answer: AB
AB definitely Okay
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: AB
AB definitely Okay
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: AB
AB definitely Okay
upvoted 1 times
6 months ago
Selected Answer: AB
Obviously A & B
upvoted 1 times
6 months, 1 week ago
1) SQS + Lambda 2) SQS FIFO + Lambda 3) SNS + Lambda
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: AB
A and B looks reasonable
upvoted 1 times
6 months, 2 weeks ago
ok, A and B are the "correct" options given the set that we were provided, but you can simply configure a trigger in the S3 to invoke the lambda
that will process and upload the image... As an architect I would never go the way the solution is presented in this scenario.
upvoted 2 times
7 months ago
AAAAAAAAAABBBBBBBBBB
upvoted 1 times
Topic 1
Question #19
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application
servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance
from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the
web server.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and
forward the packets to the appliance.
Correct Answer:
B
Highly Voted
8 months, 3 weeks ago
Answer is D . Use Gateway Load balancer
REF: https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-balancer/
upvoted 25 times
Highly Voted
7 months, 3 weeks ago
It's D, because Gateway Load Balancer is a new type of load balancer that operates at layer 3 of the OSI model and is built on Hyperplane, which is
capable of handling several thousands of connections per second. Gateway Load Balancer endpoints are configured in spoke VPCs originating or
receiving traffic from the Internet. This architecture allows you to perform inline inspection of traffic from multiple spoke VPCs in a simplified and
scalable fashion while still centralizing your virtual appliances.
upvoted 23 times
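For reference, here is a sketch of the parameters that distinguish a Gateway Load Balancer from the other load balancer types in option D. Names and resource IDs are hypothetical; in practice these dicts are passed to the elbv2 API's create_load_balancer and create_target_group calls. GWLB target groups always use the GENEVE protocol on port 6081.

```python
# Sketch of a Gateway Load Balancer in the inspection VPC. All names and
# resource IDs below are hypothetical placeholders.
gwlb_params = {
    "Name": "inspection-gwlb",
    "Type": "gateway",                  # "gateway" distinguishes GWLB from ALB/NLB
    "Subnets": ["subnet-0abc1234"],     # hypothetical subnet in the inspection VPC
}

# Target group pointing at the appliance's IP interface. GWLB encapsulates
# traffic with GENEVE, always on port 6081.
target_group_params = {
    "Name": "appliance-targets",
    "Protocol": "GENEVE",
    "Port": 6081,                       # fixed GENEVE port for GWLB target groups
    "TargetType": "ip",                 # register the appliance's IP interface
    "VpcId": "vpc-0def5678",            # hypothetical inspection VPC
}
```

A Gateway Load Balancer endpoint in the application VPC then receives the ingress traffic (via the route table) and hands it to this GWLB, which forwards it to the appliance and returns it after inspection.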
Most Recent
2 days, 22 hours ago
Answer D -
Gateway Load Balancer ( GWLB )
Primarily used for deploying, scaling, and running third-party virtual appliances.
The virtual appliances can be your custom firewalls, deep packet inspection systems, or intrusion detection and prevention systems in AWS
In this case, the appliance is used as a security system before the web tier.
upvoted 1 times
1 week, 3 days ago
Selected Answer: A
A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
By creating a Network Load Balancer (NLB) in the public subnet, you can configure it to forward incoming traffic to the virtual firewall appliance for
inspection. The NLB operates at the transport layer (Layer 4) and can distribute traffic across multiple instances, including the firewall appliance.
This allows you to scale the inspection capacity if needed. The NLB can be associated with a target group that includes the IP address of the firewall
appliance, directing traffic to it before reaching the web servers.
Option B (Application Load Balancer) is not suitable for this scenario as it operates at the application layer (Layer 7) and does not provide direct
access to the IP packets for inspection.
Option C (Transit Gateway) and option D (Gateway Load Balancer) introduce additional complexity and overhead compared to using an NLB. They
are not necessary for achieving the requirement of inspecting traffic to the web application before reaching the web servers.
upvoted 3 times
4 days, 7 hours ago
Best answer, well explained.
upvoted 1 times
2 weeks, 6 days ago
Selected Answer: D
Answer is D . Use Gateway Load balancer
upvoted 1 times
1 month, 1 week ago
Keyword: third-party virtual appliance
Gateway Load Balancer helps you easily deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing
traffic across multiple virtual appliances while scaling them up or down, based on demand. This decreases potential points of failure in your
network and increases availability.
upvoted 2 times
1 month, 2 weeks ago
For packet inspection (layer 3 osi model), you can use Gateway Load Balancer which is a new type of load balancer that operates at layer 3 of the
OSI model.
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: D
Here's why:
Traffic enters the service consumer VPC through the internet gateway.
Traffic is sent to the Gateway Load Balancer endpoint, as a result of ingress routing.
Traffic is sent to the Gateway Load Balancer for inspection through the security appliance.
Traffic is sent back to the Gateway Load Balancer endpoint after inspection.
Traffic is sent to the application servers (destination subnet).
https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/getting-started.html
But I ain't completely sure about the least operational overhead.
upvoted 2 times
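The five-step ingress-routing flow above can be sketched with boto3. This is only a sketch, not the documented setup procedure: all names, subnet IDs, and VPC IDs are hypothetical, and the helper merely builds the request parameters for the three relevant API calls so the GWLB pieces are easy to see side by side.

```python
def gwlb_requests(inspection_subnet_id: str, spoke_vpc_id: str,
                  spoke_subnet_id: str) -> dict:
    """Build boto3 request parameters for a Gateway Load Balancer in the
    inspection VPC, its endpoint service, and the GWLB endpoint in the
    spoke (service consumer) VPC. All names/IDs are illustrative."""
    return {
        # elbv2.create_load_balancer(**...): Type="gateway" makes it a GWLB
        "load_balancer": {
            "Name": "inspection-gwlb",          # hypothetical name
            "Type": "gateway",
            "Subnets": [inspection_subnet_id],
        },
        # ec2.create_vpc_endpoint_service_configuration(**...);
        # GatewayLoadBalancerArns is filled in after the GWLB exists
        "endpoint_service": {
            "AcceptanceRequired": False,
        },
        # ec2.create_vpc_endpoint(**...): the endpoint that the spoke VPC's
        # ingress route table points traffic at
        "gwlb_endpoint": {
            "VpcEndpointType": "GatewayLoadBalancer",
            "VpcId": spoke_vpc_id,
            "SubnetIds": [spoke_subnet_id],
        },
    }
```

Passing these dicts to the commented boto3 calls (with credentials configured and the ARN/ServiceName fields wired up between steps) would reproduce the flow described above: internet gateway, ingress route to the GWLB endpoint, inspection appliance, then the application subnet.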
1 month, 3 weeks ago
Selected Answer: B
This is the answer from ChatGPT:
Option B is the correct solution because the ALB can be used to redirect traffic to the virtual firewall appliance without requiring any changes to the
backend application servers. The ALB can also be configured to send traffic to multiple targets, allowing the architect to perform high availability
and load balancing. This solution is easy to implement and manage and does not require any additional components such as transit gateways or
gateway load balancers.
Option D is not the optimal solution since Gateway Load Balancer (GWLB) is intended for use with virtual appliances in the cloud, such as firewalls
and intrusion prevention systems. However, it adds operational overhead since creating and managing a Gateway Load Balancer requires several
components, including an endpoint group and listener.
upvoted 1 times
2 months ago
Selected Answer: B
In the scenario described, the web servers, application servers, and database servers are all located within the same VPC. Therefore, a Gateway Load
Balancer may not be the most suitable choice for load balancing traffic between them.
Instead, an Application Load Balancer (ALB) would be a better option as it operates at Layer 7 and can inspect traffic at the application layer. This
would allow the virtual firewall to inspect traffic before it reaches the web servers, which is the requirement specified in the scenario.
Overall, while a Gateway Load Balancer can be useful in certain scenarios, it is not the best choice for this particular use case. An Application Load
Balancer is a better option as it provides the necessary features to integrate the web application with the virtual firewall appliance and inspect all
traffic before it reaches the web server.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: D
Keywords: third-party virtual firewall appliance from AWS Marketplace in an inspection VPC -> only Gateway Load Balancer supports it.
A: Incorrect - Network Load Balancer does not support routing traffic to a third-party virtual firewall appliance.
B: Incorrect - Application Load Balancer does not support routing traffic to a third-party virtual firewall appliance.
C: Incorrect - Transit Gateway is used as a central hub to connect VPCs, Direct Connect gateways, and VPN connections. Route tables in Transit
Gateway only limit which VPCs can talk to other VPCs.
D: Correct - Gateway Load Balancer supports routing traffic to a third-party virtual firewall appliance at layer 3, which makes it different from ALB and NLB.
upvoted 16 times
3 months ago
Selected Answer: D
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward
the packets to the appliance.
This solution meets the requirements with the least operational overhead because Gateway Load Balancers are designed specifically for integrating
and distributing traffic to virtual appliances, such as firewalls, for inspection and processing. The Gateway Load Balancer endpoint ensures that
traffic is sent to the appliance for inspection before reaching the web server, while minimizing the operational complexity.
upvoted 3 times
3 months, 2 weeks ago
Selected Answer: D
Answer is D. The traffic didn't go to the application directly. Rather, it needs to go though the inspection VPC which holds the 3rd party
applications.
upvoted 2 times
3 months, 2 weeks ago
Answer is D. https://docs.aws.amazon.com/es_es/elasticloadbalancing/latest/gateway/getting-started.html
upvoted 2 times
3 months, 2 weeks ago
D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward
the packets to the appliance.
A Gateway Load Balancer can inspect traffic before forwarding it to a virtual appliance for additional processing. The solution will not require
changing the existing architecture and will have the least amount of operational overhead. The appliance can be configured with a specific IP
interface to accept IP packets. The Gateway Load Balancer can be configured with an endpoint to route incoming packets to the appliance. The
solution ensures all traffic to the web application is inspected before it reaches the web server.
upvoted 2 times
4 months ago
Selected Answer: D
Gateway Load Balancer helps you easily deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing
traffic across multiple virtual appliances while scaling them up or down, based on demand. This decreases potential points of failure in your
network and increases availability.
upvoted 2 times
5 months, 1 week ago
Gateway Load Balancer helps you easily deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing
traffic across multiple virtual appliances while scaling them up or down, based on demand.
upvoted 2 times
Topic 1
Question #20
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is
stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the
production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the
production EBS volumes to the EC2 instances in the test environment.
C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances
in the test environment before restoring the volumes from the production EBS snapshots.
D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the
snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
Correct Answer:
D
Highly Voted
8 months ago
Selected Answer: D
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates the
latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore instantly deliver all
of their provisioned performance.
upvoted 18 times
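The clone workflow option D describes can be sketched as boto3 request parameters; a minimal sketch with hypothetical snapshot and Availability Zone values. The helper only builds the dicts, which would then be passed to `ec2.enable_fast_snapshot_restores(**...)` and `ec2.create_volume(**...)` with credentials configured.

```python
def fsr_clone_requests(snapshot_id: str, az: str) -> dict:
    """Parameters for enabling fast snapshot restore (FSR) on a
    production snapshot and then restoring it into a fresh, fully
    initialized EBS volume for the test environment."""
    return {
        # ec2.enable_fast_snapshot_restores(**...): pre-warms the snapshot
        # in the target AZ so restored volumes deliver full provisioned
        # performance immediately
        "enable_fsr": {
            "AvailabilityZones": [az],
            "SourceSnapshotIds": [snapshot_id],
        },
        # ec2.create_volume(**...): the new, isolated test volume; changes
        # to it never touch the production volume
        "create_volume": {
            "SnapshotId": snapshot_id,
            "AvailabilityZone": az,
        },
    }
```

Note that FSR is enabled per snapshot per Availability Zone, which is why the AZ appears in both requests.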
Highly Voted
2 months, 3 weeks ago
Selected Answer: D
Keywords:
- Modifications to the cloned data must not affect the production environment.
- Minimize the time that is required to clone the production data into the test environment.
A: Incorrect - we can do this, but it does not minimize the time as required.
B: Incorrect - This approach uses the same EBS volumes for production and test. If we modify the test data, the production environment is affected.
C: Incorrect - Restoring an EBS snapshot always creates new EBS volumes; you cannot restore into existing volumes.
D: Correct - Turn on the EBS fast snapshot restore feature on the EBS snapshots -> no latency on first use
upvoted 5 times
Most Recent
1 week, 3 days ago
Selected Answer: D
Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into
new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment.
Enabling the EBS fast snapshot restore feature allows you to restore EBS snapshots into new EBS volumes almost instantly, without needing to wait
for the data to be fully copied from the snapshot. This significantly reduces the time required to clone the production data.
By taking EBS snapshots of the production EBS volumes and restoring them into new EBS volumes in the test environment, you can ensure that the
cloned data is separate and does not affect the production environment. Attaching the new EBS volumes to the EC2 instances in the test
environment allows you to access the cloned data.
upvoted 2 times
1 week, 6 days ago
Selected Answer: D
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates the
latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore instantly deliver all
of their provisioned performance.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
D is correct
upvoted 1 times
1 month, 2 weeks ago
You can use EBS Fast Snapshot restore feature to restore EBS snapshots to a new EBS volume with minimal downtime.
upvoted 1 times
1 month, 2 weeks ago
ANSWER - C
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Key words: minimize the time
upvoted 1 times
4 months ago
Selected Answer: D
The EBS fast snapshot restore feature allows you to restore EBS snapshots to new EBS volumes with minimal downtime. This is particularly useful
when you need to restore large volumes or when you need to restore a volume to an EC2 instance in a different Availability Zone. When you
enable the fast snapshot restore feature, the EBS volume is restored from the snapshot in the shortest amount of time possible, typically within a
few minutes.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
Option A is correct because the question stated that the software that will access the test environment needs high I/O performance, which is the
core feature of instance store. The only risk for instance store is that it is lost when the EC2 instance it is attached to is terminated; however, this is a test
environment, so long-term durability may not be required. Option C is not correct because it mentions creating a new EBS volume and restoring the
snapshot. The snapshot can be restored without creating a new EBS volume first. It does not satisfy the minimum overhead requirement.
upvoted 3 times
4 months, 2 weeks ago
Selected Answer: D
D. They are all viable solutions, however EBS fast snapshot will increase the speed as the question does ask for minimal time and not about cost,
automation, minimum overheads etc.
upvoted 1 times
5 months, 2 weeks ago
C is correct.
Option A, restoring EBS snapshots onto EC2 instance store volumes, is not correct because EC2 instance store volumes are not as durable as EBS
volumes, so it may not guarantee data durability and availability.
Option B, using the EBS Multi-Attach feature, is not correct because it would still need to detach and reattach the volumes, and it would cause
data unavailability.
Option D, using the EBS fast snapshot restore feature, is not correct because it would still require creating new volumes and attaching them to the instances,
and it does not guarantee the data is ready for use as soon as the restore process completes.
upvoted 2 times
5 months, 1 week ago
Option B is wrong because Multi-Attach (which isn't available for all instance types) allows attaching the SAME EBS volume to multiple EC2
instances, which would mean that modifications in the test environment would also modify production data.
Option D is correct, the data IS ready for use as soon as the restore process completes. It ensures that the I/O performance remains consistent
even when reading blocks for the first time.
Option C is incorrect as it's saying you're creating new instances with completely new volumes and THEN restoring the EBS snapshots. Creating
new, empty volumes is unnecessary. Just restore them from the EBS snapshot.
upvoted 1 times
5 months, 2 weeks ago
C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the
test environment before restoring the volumes from the production EBS snapshots.
Take EBS snapshots of the production EBS volumes, which are point-in-time copies of the data.
Create and initialize new EBS volumes in the test environment.
Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots. This will
allow the data to be ready for use as soon as the restore process completes, and it ensures that the software that accesses the data will have
consistently high I/O performance.
upvoted 1 times
5 months, 1 week ago
The EBS fast snapshot restore feature is the one that gives you consistently high I/O performance.
From the AWS docs:
"Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates the
latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore instantly deliver
all of their provisioned performance."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
The EBS fast snapshot restore feature allows you to restore EBS snapshots to new EBS volumes with minimal downtime. This is particularly useful
when you need to restore large volumes or when you need to restore a volume to an EC2 instance in a different Availability Zone. When you
enable the fast snapshot restore feature, the EBS volume is restored from the snapshot in the shortest amount of time possible, typically within a
few minutes.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: D
Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into
new EBS volumes.
upvoted 2 times
6 months ago
Selected Answer: D
The solution that will meet these requirements is D: Take EBS snapshots of the production EBS volumes, turn on the EBS fast snapshot restore
feature on the EBS snapshots, and restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test
environment.
EBS fast snapshot restore is a feature that enables you to restore an EBS snapshot to a new EBS volume within seconds, providing consistently high
I/O performance. By taking EBS snapshots of the production EBS volumes, turning on the EBS fast snapshot restore feature, and restoring the
snapshots into new EBS volumes, you can quickly clone the production data into the test environment and minimize the time required to do so.
The new EBS volumes can be attached to EC2 instances in the test environment to provide access to the cloned data.
upvoted 2 times
6 months ago
Option A is incorrect because restoring EBS snapshots onto EC2 instance store volumes will not provide consistently high I/O performance.
Option B is incorrect because using the EBS Multi-Attach feature to attach the production EBS volumes to the EC2 instances in the test
environment could potentially affect the production environment and is not a recommended practice.
Option C is incorrect because creating and initializing new EBS volumes and restoring the production data onto them can take longer than
restoring the data from an EBS snapshot with the EBS fast snapshot restore feature.
upvoted 5 times
6 months ago
Selected Answer: D
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates the
latency of I/O operations on a block when it is accessed for the first time. Volumes that are created using fast snapshot restore instantly deliver all
of their provisioned performance.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
upvoted 1 times
Topic 1
Question #21
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24
hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the
distributions. Store the order data in Amazon S3.
B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application
Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the
Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for
MySQL.
D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin.
Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
D because all of the components are infinitely scalable:
DynamoDB, API Gateway, Lambda, and of course S3 + CloudFront
upvoted 22 times
Highly Voted
6 months ago
Selected Answer: D
The solution that will meet these requirements with the least operational overhead is D: Use an Amazon S3 bucket to host the website's static
content, deploy an Amazon CloudFront distribution, set the S3 bucket as the origin, and use Amazon API Gateway and AWS Lambda functions for
the backend APIs. Store the data in Amazon DynamoDB.
Using Amazon S3 to host static content and Amazon CloudFront to distribute the content can provide high performance and scale for websites
with millions of requests each hour. Amazon API Gateway and AWS Lambda can be used to build scalable and highly available backend APIs to
support the website, and Amazon DynamoDB can be used to store the data. This solution requires minimal operational overhead as it leverages
fully managed services that automatically scale to meet demand.
upvoted 10 times
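As a sketch of the serverless backend in option D: an API Gateway-invoked Lambda function that looks up the single active deal by date in DynamoDB. The table name, partition key, and response shape are all hypothetical, and the boto3 call assumes the table exists and the function's role can read it.

```python
import json
from datetime import datetime, timezone

TABLE_NAME = "daily-deals"  # hypothetical table, partition key "deal_date"


def deal_key(now: datetime) -> dict:
    """DynamoDB key for today's one-deal-a-day item, keyed on UTC date
    so exactly one item is active per 24-hour window."""
    return {"deal_date": {"S": now.strftime("%Y-%m-%d")}}


def handler(event, context):
    """Lambda handler behind API Gateway: single-digit-millisecond
    GetItem against DynamoDB, no servers to scale or patch."""
    import boto3  # available in the Lambda runtime
    item = boto3.client("dynamodb").get_item(
        TableName=TABLE_NAME, Key=deal_key(datetime.now(timezone.utc))
    ).get("Item")
    return {
        "statusCode": 200 if item else 404,
        "body": json.dumps(item or {"message": "no deal today"}),
    }
```

Static assets (the product page itself) would live in the S3 origin behind CloudFront; only this thin read path hits the API.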
6 months ago
Option A is incorrect because using multiple S3 buckets to host the full website would not provide the required performance and scale for
millions of requests each hour with millisecond latency.
Option B is incorrect because deploying the full website on EC2 instances and using an Application Load Balancer (ALB) and an RDS database
would require more operational overhead to maintain and scale the infrastructure.
Option C is incorrect because while deploying the application in containers and hosting them on Amazon Elastic Kubernetes Service (EKS) can
provide high performance and scale, it would require more operational overhead to maintain and scale the infrastructure compared to using
fully managed services like S3 and CloudFront.
upvoted 6 times
Most Recent
1 week, 3 days ago
Selected Answer: D
Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use
Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.
This solution leverages the scalability, low latency, and operational ease provided by AWS services.
This solution minimizes operational overhead because it leverages managed services, eliminating the need for manual scaling or management of
infrastructure. It also provides the required scalability and low-latency response times to handle peak-hour traffic effectively.
Options A, B, and C involve more operational overhead and management responsibilities, such as managing EC2 instances, Auto Scaling groups,
RDS for MySQL, containers, and Kubernetes clusters. These options require more manual configuration and maintenance compared to the
serverless and managed services approach provided by option D.
upvoted 2 times
2 weeks, 6 days ago
Selected Answer: D
D is correct
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
D is correct
upvoted 1 times
2 months ago
Selected Answer: D
ans: D
keywords: only one product on sale -- means static content
millions of requests each hour with millisecond latency -- DynamoDB
LEAST operational overhead -- choose a serverless architecture -- Lambda / API Gateway can handle millions of requests in a cost-effective
manner
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: D
Keywords:
- Each day will feature exactly one product on sale for a period of 24 hours
- Handle millions of requests each hour with millisecond latency during peak hours.
- LEAST operational overhead
A: Incorrect - We cannot store all the data in S3 because our data is dynamic (each day will feature exactly one product on sale for a period of 24
hours).
B: Incorrect - We don't have a cache to improve performance (one product on sale for a period of 24 hours). Auto Scaling groups and RDS for MySQL
need time to scale and cannot scale immediately.
C: Incorrect - We don't have a cache to improve performance (one product on sale for a period of 24 hours). Kubernetes Cluster Autoscaler can scale
better than Auto Scaling groups, but it also needs time to scale.
D: Correct - DynamoDB, S3, CloudFront, and API Gateway are managed services and are highly scalable. CloudFront can cache static and dynamic
data.
upvoted 7 times
2 months, 3 weeks ago
Selected Answer: D
Option D uses Amazon S3 to host the website's static content, which requires no servers to be provisioned or managed. Additionally, Amazon
CloudFront can be used to improve the latency and scalability of the website. The backend APIs can be built using Amazon API Gateway and AWS
Lambda, which can handle millions of requests with low operational overhead. Amazon DynamoDB can be used to store order data, which can
scale to handle high request volumes with low latency.
upvoted 1 times
3 months ago
Selected Answer: D
The most important keyword is millisecond latency; only DynamoDB can provide that at this scale.
Obviously, S3, Lambda, CloudFront, etc. have built-in scaling.
upvoted 2 times
3 months, 1 week ago
Selected Answer: D
Answer is D. All services proposed are managed services and auto scalable.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
high I/O = DynamoDB
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
millisecond latency --> DynamoDB
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: D
only all services in D are auto-scaling
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: D
Serverless technologies are better options
upvoted 1 times
7 months, 2 weeks ago
Why not B? Can an Application Load Balancer accept millions of requests/hr?
upvoted 2 times
7 months, 2 weeks ago
For me, the keyword was millisecond latency. Option B suggests RDS as the database, but Option D is DynamoDB.
DynamoDB - Fast, flexible NoSQL database service for single-digit millisecond performance at any scale
upvoted 2 times
7 months ago
Yes, and also LEAST operational overhead. Scaling the application on EC2 instances is hard work and requires a very good architect.
upvoted 1 times
6 months, 4 weeks ago
And scaling takes time, so Auto Scaling groups cannot react instantly to a massive surge in demand
upvoted 2 times
7 months, 2 weeks ago
D is the correct answer due to millisecond latency, which will involve CloudFront.
upvoted 2 times
Topic 1
Question #22
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to
the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions
architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
"unpredictable pattern" - always go for Intelligent Tiering of S3
It also meets the resiliency requirement: "S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible
Retrieval, and S3 Glacier Deep Archive redundantly store objects on multiple devices across a minimum of three Availability Zones in an AWS
Region" https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html
upvoted 22 times
Highly Voted
6 months ago
Selected Answer: B
The storage option that meets these requirements is B: S3 Intelligent-Tiering.
Amazon S3 Intelligent Tiering is a storage class that automatically moves data to the most cost-effective storage tier based on access patterns. It
can store objects in two access tiers: the frequent access tier and the infrequent access tier. The frequent access tier is optimized for frequently
accessed objects and is charged at the same rate as S3 Standard. The infrequent access tier is optimized for objects that are not accessed
frequently and are charged at a lower rate than S3 Standard.
S3 Intelligent Tiering is a good choice for storing media files that are accessed frequently and infrequently in an unpredictable pattern because it
automatically moves data to the most cost-effective storage tier based on access patterns, minimizing storage and retrieval costs. It is also resilient
to the loss of an Availability Zone because it stores objects in multiple Availability Zones within a region.
upvoted 7 times
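Objects can be written straight into Intelligent-Tiering at upload time; a minimal sketch with hypothetical bucket and key names, building only the request parameters that would be passed to `s3.put_object(**...)`.

```python
def intelligent_tiering_put(bucket: str, key: str, body: bytes) -> dict:
    """Request parameters for s3.put_object(**...) storing the object
    directly in the S3 Intelligent-Tiering storage class, so S3 starts
    tracking its access pattern from day one."""
    return {
        "Bucket": bucket,        # hypothetical bucket name
        "Key": key,
        "Body": body,
        "StorageClass": "INTELLIGENT_TIERING",
    }
```

Objects already in the bucket could instead be moved with an S3 Lifecycle transition rule targeting the same storage class.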
6 months ago
Option A, S3 Standard, is not a good choice because it does not offer the cost optimization of S3 Intelligent-Tiering.
Option C, S3 Standard-Infrequent Access (S3 Standard-IA), is not a good choice because it is optimized for infrequently accessed objects and
does not offer the cost optimization of S3 Intelligent-Tiering.
Option D, S3 One Zone-Infrequent Access (S3 One Zone-IA), is not a good choice because it is not resilient to the loss of an Availability Zone. It
stores objects in a single Availability Zone, making it less durable than other storage classes.
upvoted 4 times
Most Recent
1 week, 3 days ago
Selected Answer: B
S3 Intelligent-Tiering is designed to optimize costs by automatically moving objects between two access tiers: frequent access and infrequent
access. It monitors access patterns to determine the most appropriate tier for each object.
In the given scenario, where some media files are accessed frequently while others are rarely accessed in an unpredictable pattern, S3 Intelligent-
Tiering can be a suitable choice. It automatically adjusts the storage tier based on the access patterns, ensuring that frequently accessed files
remain in the frequent access tier for fast retrieval, while rarely accessed files are moved to the infrequent access tier for cost savings.
Compared to S3 Standard-IA, S3 Intelligent-Tiering provides more granular cost optimization and may be more suitable if the access patterns of
the media files fluctuate over time. However, it's worth noting that S3 Intelligent-Tiering may have slightly higher storage costs compared to S3
Standard-IA due to the added flexibility and automation it offers.
upvoted 2 times
1 month, 2 weeks ago
B - for unpredictable patterns use intelligent tiering
upvoted 1 times
1 month, 4 weeks ago
B - "UNPREDICTABLE pattern" is the key
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Keywords:
- Must be resilient to the loss of an Availability Zone.
- files are accessed FREQUENTLY while other files are RARELY accessed in an UNPREDICTABLE pattern.
A - Incorrect: S3 Standard is not cost-effective for rarely accessed files
B - Correct: S3 Intelligent-Tiering is good for files which are frequently or rarely accessed in an unpredictable pattern. Intelligent-Tiering will help us
analyze the pattern and move rarely accessed files to lower-cost storage.
C - Incorrect: Standard-Infrequent Access is not cost-effective for frequently accessed files
D - Incorrect: One Zone-Infrequent Access is not resilient to the loss of an Availability Zone
upvoted 3 times
2 months, 3 weeks ago
Selected Answer: B
Key words: in an unpredictable pattern.
upvoted 1 times
3 months, 1 week ago
Selected Answer: B
S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or
retention period
upvoted 1 times
3 months, 4 weeks ago
Selected Answer: B
S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive are all
designed to sustain data in the event of the loss of an entire Amazon S3 Availability Zone.
upvoted 1 times
5 months ago
Selected Answer: B
B is correct
upvoted 1 times
5 months, 2 weeks ago
C. S3 Standard-Infrequent Access (S3 Standard-IA)
S3 Standard-IA is designed for infrequently accessed data, which is a good fit for the media files that are rarely accessed in an unpredictable
pattern. S3 Standard-IA is also cross-Region replicated, providing resilience to the loss of an Availability Zone. Additionally, S3 Standard-IA has a
lower storage and retrieval cost compared to S3 Standard and S3 Intelligent-Tiering, which makes it a cost-effective option for storing infrequently
accessed data.
upvoted 1 times
5 months, 3 weeks ago
B is clearly the answer.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
unpredictable pattern = Intelligent Tiering
upvoted 2 times
6 months, 3 weeks ago
Selected Answer: B
S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive are all
designed to sustain data in the event of the loss of an entire Amazon S3 Availability Zone.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
Since there are files which will be accessed frequently and others infrequently
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
"unpredictable pattern" - remember the keyword and always go for Intelligent Tiering of S3
upvoted 2 times
7 months, 1 week ago
B is correct
upvoted 1 times
Topic 1
Question #23
A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not
accessed after 1 month. The company must keep the files indefinitely.
Which storage solution will meet these requirements MOST cost-effectively?
A. Configure S3 Intelligent-Tiering to automatically migrate objects.
B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
C. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1
month.
D. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1
month.
Correct Answer:
B
Highly Voted
6 months ago
Selected Answer: B
The storage solution that will meet these requirements most cost-effectively is B: Create an S3 Lifecycle configuration to transition objects from S3
Standard to S3 Glacier Deep Archive after 1 month.
Amazon S3 Glacier Deep Archive is a secure, durable, and extremely low-cost Amazon S3 storage class for long-term retention of data that is rarely
accessed and for which retrieval times of several hours are acceptable. It is the lowest-cost storage option in Amazon S3, making it a cost-effective
choice for storing backup files that are not accessed after 1 month.
You can use an S3 Lifecycle configuration to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month. This will
minimize the storage costs for the backup files that are not accessed frequently.
upvoted 7 times
6 months ago
Option A, configuring S3 Intelligent-Tiering to automatically migrate objects, is not a good choice because it is not designed for long-term
storage and does not offer the cost benefits of S3 Glacier Deep Archive.
Option C, transitioning objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month, is not a good choice because
it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.
Option D, transitioning objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month, is not a good choice
because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.
upvoted 2 times
5 months, 2 weeks ago
Also, S3 Standard-IA and One Zone-IA have a minimum storage duration of 30 days; they are not the cheapest option for data kept indefinitely.
upvoted 2 times
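The lifecycle transition in answer B can be written down concretely. A minimal sketch, assuming a hypothetical bucket name and a `backups/` prefix (neither is given in the question); the helper function just extracts the transitions for inspection:

```python
# Lifecycle rule for answer B: move objects from S3 Standard to Glacier
# Deep Archive 30 days (~1 month) after creation. Bucket and prefix are
# hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-backups-after-1-month",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

def enabled_transitions(config):
    """Return (rule_id, days, storage_class) for every enabled rule."""
    return [
        (rule["ID"], t["Days"], t["StorageClass"])
        for rule in config["Rules"]
        if rule["Status"] == "Enabled"
        for t in rule["Transitions"]
    ]

# Applying it for real requires credentials:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-backup-bucket",
#     LifecycleConfiguration=lifecycle_config)
```

Retrieval from Deep Archive then takes hours (standard retrieval completes within 12 hours), which is acceptable here because the files are never accessed after the first month.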
Highly Voted
8 months, 2 weeks ago
B: Transition to Glacier deep archive for cost efficiency
upvoted 6 times
Most Recent
1 week, 3 days ago
Selected Answer: B
S3 Glacier Deep Archive is designed for long-term archival storage with very low storage costs. It offers the lowest storage prices among the
storage classes in Amazon S3. However, it's important to note that accessing data from S3 Glacier Deep Archive has a significant retrieval time,
ranging from several minutes to hours, which may not be suitable if you require immediate access to the backup files.
If the files need to be accessed frequently within the first month but not after that, transitioning them to S3 Glacier Deep Archive using an S3
Lifecycle configuration can provide cost savings. However, keep in mind that retrieving the files from S3 Glacier Deep Archive will have a significant
time delay.
upvoted 2 times
1 month, 1 week ago
Selected Answer: B
B is the correct answer
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
B is correct answer
upvoted 1 times
1 month, 4 weeks ago
Transition to Glacier storage is cost-efficient, and data can be retrieved within hours (Deep Archive standard retrieval completes within 12 hours).
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Keywords:
- The files are accessed frequently for 1 month.
- Files are NOT accessed after 1 month.
A: Incorrect - We know the pattern (accessed frequently for 1 month, NOT accessed after 1 month), so we can configure lifecycle transitions manually to
reduce the cost as much as possible.
B: Correct - Glacier Deep Archive is the most cost-effective for files that are rarely used.
C: Incorrect - Standard-Infrequent Access is good for infrequent access but not for files that are rarely (never) used.
D: Incorrect - One Zone-Infrequent Access can reduce cost more than Standard-Infrequent Access, but it is not the best option compared to
Glacier Deep Archive.
upvoted 2 times
3 months, 4 weeks ago
The answer is B. "S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for
data that may be accessed once or twice in a year." See here: https://aws.amazon.com/s3/storage-classes/
upvoted 1 times
4 months ago
Selected Answer: B
Deep Archive supports long-term retention (businesses often must keep files for 7 or more years), so it is the most cost-optimal and useful option in this scenario.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Glacier deep archive = lowest cost (accessed once or twice a year)
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Correct answer: B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Transition to Glacier is cost effective.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
B is the answer.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
Amazon S3 Glacier Deep Archive – for long term storage: Minimum storage duration of 180 days
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
Since deep archive is the cheapest storage option
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
Deep archive is cheaper
upvoted 2 times
6 months, 4 weeks ago
I thought it can only go to Deep Archive after 90 days?
upvoted 2 times
4 months, 2 weeks ago
Nah, pretty sure it's a minimum storage time of 180 days, meaning you're billed for at least half a year of Glacier storage, but you can put objects into
Glacier whenever you want.
upvoted 1 times
Topic 1
Question #24
A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types
for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth
analysis to identify the root cause of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?
A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.
B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.
D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a
source to generate an interactive graph based on instance types.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/68306-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 25 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
The requested result is a graph, so...
A - can't be as the result is a report
B - can't be as it is limited to 14 days visibility and the graph has to cover 2 months
C - seems to provide graphs and the best option available, as...
D - could provide graphs, BUT involves operational overhead, which has been requested to be minimised.
upvoted 16 times
4 months, 2 weeks ago
14 days? Fam, you ever logged into the console?
upvoted 6 times
7 months ago
Cost Explorer: AWS prepares the data about your costs for the current month and the last 12 months: https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
upvoted 12 times
4 months, 3 weeks ago
B. This is correct because there is no limit of 14 days. Quoted from Amazon: "AWS prepares the data about your costs for the current month and
the last 12 months, and then calculates the forecast for the next 12 months." (https://aws.amazon.com/aws-cost-management/aws-cost-explorer/)
upvoted 5 times
8 months, 1 week ago
12 months data visible on Cost Explorer.
upvoted 9 times
Most Recent
2 days, 19 hours ago
Selected Answer: B
Answer - B
https://tutorialsdojo.com/aws-billing-and-cost-management/
Other default reports available are:
The EC2 Monthly Cost and Usage report lets you view all of your AWS costs over the past two months, as well as your current month-to-date costs.
upvoted 1 times
1 week, 3 days ago
Selected Answer: D
By configuring AWS Cost and Usage Reports, the architect can generate detailed reports containing cost and usage information for various AWS
services, including EC2. The reports can be automatically delivered to an Amazon S3 bucket, providing a centralized location for storing cost data.
To visualize and analyze the EC2 costs based on instance types, the architect can use Amazon QuickSight, a business intelligence tool offered by
AWS. QuickSight can directly access data stored in Amazon S3 and generate interactive graphs, charts, and dashboards for detailed analysis. By
connecting QuickSight to the S3 bucket containing the cost reports, the architect can easily create a graph comparing the EC2 costs over the last 2
months based on instance types.
This approach minimizes operational overhead by leveraging AWS services (Cost and Usage Reports, Amazon S3, and QuickSight) to automate data
retrieval, storage, and visualization, allowing for efficient analysis of EC2 costs without the need for manual data gathering and processing.
upvoted 2 times
2 weeks, 1 day ago
Selected Answer: B
B is the answer
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
B. This is correct because there is no limit of 14 days
upvoted 2 times
1 month, 2 weeks ago
Question says "in-depth analysis".
Answer keyword - granular (monthly, daily, hourly filtering), so use AWS Cost Explorer.
upvoted 1 times
1 month, 3 weeks ago
D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source
to generate an interactive graph based on instance types.
AWS Cost and Usage Reports provide a detailed report of your AWS costs, and can be configured to send to an S3 bucket. Using Amazon
QuickSight with S3 as a source, the solutions architect can create an interactive graph to compare the last 2 months of EC2 costs based on instance
types. This solution provides the most flexibility and customization in terms of generating the report and analyzing the data. AWS Budgets and
Cost Explorer are useful tools, but they may not provide the same level of detail and customization for this particular scenario. Using the graphs
from the AWS Billing and Cost Management dashboard may not provide enough detail to identify the root cause of the vertical scaling.
upvoted 1 times
1 month, 4 weeks ago
Billing and Cost Management console is the place to look for the details
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Keywords: create a graph comparing the LAST 2 MONTHS of EC2 costs and perform an IN-DEPTH analysis.
A: Incorrect - AWS Budgets just reports general information, and it reports only from the time we create it. We cannot see a report from the last 2
months.
B: Correct - Cost Explorer provides an IN-DEPTH analysis with monthly/hourly granularity.
C: Incorrect - the AWS Billing and Cost Management dashboard only shows general info.
D: Incorrect - we can do it this way, but it requires more effort than Cost Explorer.
upvoted 6 times
2 months, 4 weeks ago
Selected Answer: B
The correct answer is: B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
AWS Cost Explorer is a tool that helps you analyze your AWS costs. You can use Cost Explorer to view your costs by service, by Region, and by
instance type, and to identify cost trends over time.
The granular filtering feature in Cost Explorer allows you to filter your data by specific attributes. In this case, you can filter your data by instance
type. This will allow you to see the costs of each instance type over the last 2 months.
Once you have identified the instance types that are causing the increase in costs, you can take steps to reduce those costs. For example, you can
downsize the instance types or switch to a different instance type.
upvoted 1 times
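As a sketch of what answer B amounts to programmatically (the console's granular filtering drives the same Cost Explorer API), the function below builds a `get_cost_and_usage` request for two months of EC2 costs grouped by instance type. The date range is illustrative:

```python
from datetime import date

def ec2_cost_query(start: date, end: date) -> dict:
    """Cost Explorer request: monthly EC2 costs grouped by instance type."""
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Compute Cloud - Compute"],
            }
        },
        "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
    }

params = ec2_cost_query(date(2023, 4, 1), date(2023, 6, 1))  # example range

# With credentials configured:
# import boto3
# response = boto3.client("ce").get_cost_and_usage(**params)
```

Grouping by `INSTANCE_TYPE` is exactly what exposes the unwanted vertical scaling, with no report pipeline to operate.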
4 months, 1 week ago
Feel like all the answers are a little bit ambiguous, so here is my breakdown:
AWS Billing and Cost Management provides a summarised view of spending i.e. what you spent so far this month, and the predicted end of month
bill, this is quite static and gives you a high level overview of spending. In addition you can configure your billing details from here. All of these
features are free to use with no charge for accessing the interface.
AWS Cost explorer on the other hand is a paid service ($0.01 per query). By using cost explorer you can dig down into the finer details of
expenditure, such as on a region, service, usage type or even tag based level. Using this you can identify costs by targeting your query to be
specific enough to identify these charges. Additionally, you can make use of hourly granularity to get the most accurate, up-to-date billing data.
upvoted 5 times
4 months, 1 week ago
Selected Answer: B
B. is correct.
C. There is no such thing as "the AWS Billing and Cost Management dashboard".
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: B
AWS Cost Explorer would be the easiest way to graph this data. Cost Explorer can be accessed easily and has features for filtering billing data and
graphing across relevant time periods.
https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
upvoted 3 times
5 months ago
Most comprehensive cost tool - B.
upvoted 1 times
5 months ago
Correct Answer is B:
The solutions architect can use the AWS Cost Explorer to generate a graph comparing the last 2 months of EC2 costs. This tool allows the user to
view and analyze cost and usage data, and can be used to identify the root cause of the vertical scaling. Additionally, the solutions architect can use
CloudWatch metrics to monitor the resource usage of the specific instances in question and identify any abnormal behavior. This solution would
have minimal operational overhead as it utilizes built-in AWS services that do not require additional setup or maintenance.
upvoted 2 times
Topic 1
Question #25
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to
store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the
company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the
configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native
Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point
the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into
the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into
the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
A - refactoring can be a solution, BUT requires a LOT of effort - not the answer
B - DynamoDB is NoSQL and Aurora is SQL, so it requires a DB migration... again a LOT of effort, so not the answer
C and D are similar in structure, but...
C uses SNS, which would notify the 2nd Lambda function immediately... provoking the same bottleneck... not the solution
D uses SQS, so the 2nd Lambda function can poll the queue at its own pace to keep up with the DB load process.
Usually app decoupling helps with performance improvement by distributing load. In this case, the bottleneck is solved by using queues... so
D is the answer.
upvoted 50 times
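The decoupled design in answer D can be sketched as two handlers. This is a local simulation: an in-memory deque stands in for the SQS queue and a list for the Aurora table, so the flow runs without AWS; the payload fields and the commented `send_message` call are illustrative:

```python
import json
from collections import deque

queue = deque()   # stand-in for the SQS queue
database = []     # stand-in for the Aurora table

def receive_handler(event, context=None):
    """API Gateway -> Lambda #1: validate, enqueue, and return quickly."""
    body = json.dumps(event["body"])
    queue.append(body)
    # Real call: boto3.client("sqs").send_message(
    #     QueueUrl=QUEUE_URL, MessageBody=body)
    return {"statusCode": 202}

def load_handler(event, context=None):
    """SQS -> Lambda #2: invoked with a batch of messages, writes to the DB."""
    for record in event["Records"]:  # SQS event shape
        database.append(json.loads(record["body"]))

resp = receive_handler({"body": {"city": "Lima", "temp_c": 19}})
load_handler({"Records": [{"body": queue.popleft()}]})
```

Because Lambda polls SQS and scales consumers with queue depth, the loading function drains bursts at the database's pace instead of being invoked once per incoming request.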
Highly Voted
2 months, 3 weeks ago
Selected Answer: D
Keywords:
- Company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database.
- Improve scalability and minimize the configuration effort.
A: Incorrect - Lambda is serverless and scales automatically; with EC2 instances we have to create a load balancer, Auto Scaling group, ... a lot of things.
Using native Java Database Connectivity (JDBC) drivers doesn't improve the performance.
B: Incorrect - a lot of things to change, and DynamoDB Accelerator is a cache for reads, not writes.
C: Incorrect - SNS is used for sending notifications (e-mail, SMS).
D: Correct - with SQS we can scale the application well by queuing the data.
upvoted 9 times
Most Recent
1 week, 3 days ago
Selected Answer: D
Option D, setting up two Lambda functions and integrating them using an SQS queue, would be the most suitable solution to improve scalability and
minimize configuration effort in this scenario.
By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the database, you can
independently scale and optimize each function based on their specific requirements. This approach allows for more efficient resource allocation
and reduces the potential impact of high volumes of data on the overall system.
Integrating the Lambda functions using an SQS queue adds another layer of scalability and reliability. The receiving function can push the information to
the queue, and the loading function can retrieve messages and process them independently. This asynchronous decoupling ensures
that the receiving function can handle high volumes of incoming requests without overwhelming the loading function. Additionally, SQS provides
built-in retries and guarantees message durability, ensuring that no data is lost during processing.
upvoted 2 times
1 week, 6 days ago
Selected Answer: D
D is correct, SQS can queue data
upvoted 1 times
3 weeks, 5 days ago
Selected Answer: D
To improve scalability and minimize the configuration effort, a solutions architect can choose option D.
upvoted 1 times
1 month, 2 weeks ago
To improve scalability and minimize configuration effort, you can set up two Lambda functions, one to receive the information and the other to load it,
then integrate the Lambda functions using SQS.
upvoted 1 times
2 months, 1 week ago
Is the question wrong? Amazon Aurora uses its own engine, not PostgreSQL; you need to provision an RDS instance for that.
upvoted 1 times
2 months, 1 week ago
The question asks you to improve scalability and minimize the configuration effort. While SNS is a fair answer, SQS is better. "SQS scales elastically,
and there is no limit to the number of messages per queue." See https://aws.amazon.com/blogs/compute/choosing-between-messaging-services-for-serverless-applications/.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: D
To improve scalability and minimize configuration effort, the recommended solution is to use an event-driven architecture with AWS Lambda
functions. This will allow the company to handle high volumes of data without worrying about scaling the infrastructure.
Option C and D both propose an event-driven architecture using Lambda functions, but option D is better suited for this use case because it uses
an Amazon SQS queue to decouple the receiving and loading of information into the database. This will provide better fault tolerance and
scalability, as messages can be stored in the queue until they are processed by the second Lambda function. In contrast, using SNS for this use case
might cause some events to be missed, as it only guarantees the delivery of messages to subscribers, not to the Lambda function.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: D
By using two Lambda functions, you can separate the tasks of receiving the information and loading the information into the database. This will
allow you to scale each function independently, improving scalability.
upvoted 1 times
6 months ago
Selected Answer: D
The solution that will meet these requirements is D: Set up two Lambda functions. Configure one function to receive the information. Configure the
other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS)
queue.
Using separate Lambda functions for receiving and loading the information can help improve scalability and minimize the configuration effort. By
using an Amazon SQS queue to integrate the Lambda functions, you can decouple the functions and allow them to scale independently. This can
help reduce the burden on the receiving function, improving its performance and scalability.
upvoted 3 times
6 months ago
Option A, refactoring the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances and connecting the database using
native JDBC drivers, is not a good choice because it would require significant effort to redesign and refactor the code and would not improve
scalability.
Option B, changing the platform from Aurora to Amazon DynamoDB and provisioning a DynamoDB Accelerator (DAX) cluster, is not a good
choice because it would require significant effort to redesign and refactor the code and would not improve scalability.
Option C, integrating the Lambda functions using Amazon SNS, is not a good choice because it does not provide the decoupling and scaling
benefits of using an Amazon SQS queue.
upvoted 2 times
6 months ago
It's D (100%)
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
improve scalability = SQS
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: D
Two single responsibility functions offer a better solution.
upvoted 2 times
7 months, 1 week ago
D. Keyword is to handle load which will be taking care of by SQS.
upvoted 2 times
Topic 1
Question #26
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
A. Turn on AWS Config with the appropriate rules.
B. Turn on AWS Trusted Advisor with the appropriate checks.
C. Turn on Amazon Inspector with the appropriate assessment template.
D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).
Correct Answer:
A
Highly Voted
6 months ago
Selected Answer: A
The solution that will accomplish this goal is A: Turn on AWS Config with the appropriate rules.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config to
monitor and record changes to the configuration of your Amazon S3 buckets. By turning on AWS Config and enabling the appropriate rules, you
can ensure that your S3 buckets do not have unauthorized configuration changes.
upvoted 18 times
6 months ago
AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not monitor or
record changes to the configuration of your S3 buckets.
Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to assess
the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.
Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes to
your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.
upvoted 11 times
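As a sketch of answer A: AWS Config rules are registered per account, and AWS ships managed rules for common S3 checks. The rule name below is arbitrary, while `S3_BUCKET_PUBLIC_READ_PROHIBITED` is a real managed-rule identifier, chosen here as one example of flagging an unwanted bucket configuration:

```python
# AWS Config managed rule: flag S3 buckets that allow public read access.
# The ConfigRuleName is arbitrary; the SourceIdentifier must match one of
# AWS Config's managed-rule identifiers.
config_rule = {
    "ConfigRuleName": "s3-no-public-read",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
    },
    # Evaluate only S3 buckets.
    "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
}

# With credentials configured:
# import boto3
# boto3.client("config").put_config_rule(ConfigRule=config_rule)
```

Once the rule is in place, AWS Config continuously records bucket configuration changes and marks non-compliant buckets, which is exactly the "unauthorized configuration change" review the question asks for.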
Highly Voted
7 months, 3 weeks ago
Configuration changes = AWS Config
upvoted 18 times
Most Recent
1 week, 3 days ago
Selected Answer: A
AWS Config is a service that provides a detailed view of the configuration of AWS resources in your account. By enabling AWS Config, you can
capture configuration changes and maintain a record of resource configurations over time. It allows you to define rules that check for compliance
with desired configurations and can generate alerts or automated actions when unauthorized changes occur.
To accomplish the goal of preventing unauthorized configuration changes in Amazon S3 buckets, you can configure AWS Config rules specifically
for S3 bucket configurations. These rules can check for a variety of conditions, such as ensuring that encryption is enabled, access control policies
are correctly configured, and public access is restricted.
While options B, C, and D offer valuable services for various aspects of AWS deployment, they are not specifically focused on preventing
unauthorized configuration changes in Amazon S3 buckets as effectively as enabling AWS Config.
upvoted 2 times
1 month, 2 weeks ago
Don't be mistaken into thinking it's server access logs; those are detailed records of requests made to S3. It's AWS Config because it
records configuration changes.
upvoted 1 times
1 month, 4 weeks ago
AWS Trusted Advisor only provides recommendations.
For any configuration monitoring, use AWS Config.
Inspector is for scanning for software vulnerabilities and unintended network exposure.
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
To accomplish the goal of ensuring that Amazon S3 buckets do not have unauthorized configuration changes, a solutions architect should turn on
AWS Config with the appropriate rules. AWS Config enables continuous monitoring and recording of AWS resource configurations, including S3
buckets. By turning on AWS Config with the appropriate rules, the solutions architect can be notified of any unauthorized changes made to the S3
bucket configurations, allowing for prompt corrective action. Options B, C, and D are not directly related to monitoring and preventing
unauthorized configuration changes to Amazon S3 buckets.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Key words: configuration changes
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
Option A is the correct solution. AWS Config is a service that allows you to monitor and record changes to your AWS resources over time. You can
use AWS Config to track changes to Amazon S3 buckets and their configuration settings, and set up rules to identify any unauthorized
configuration changes. AWS Config can also send notifications through Amazon SNS to alert you when these changes occur.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
aws: A - aws config
upvoted 1 times
4 months, 4 weeks ago
AAAAaaaaaaaaaaaaaaaaa
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
To ensure that Amazon S3 buckets do not have unauthorized configuration changes, a solutions architect should turn on AWS Config with the
appropriate rules.
AWS Config is a service that provides you with a detailed view of the configuration of your AWS resources. It continuously records configuration
changes to your resources and allows you to review, audit, and compare these changes over time. By turning on AWS Config and enabling the
appropriate rules, you can monitor the configuration changes to your Amazon S3 buckets and receive notifications when unauthorized changes are
made.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
unauthorized config changes = aws config
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
The solution that will accomplish this goal is A: Turn on AWS Config with the appropriate rules.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config to
monitor and record changes to the configuration of your Amazon S3 buckets. By turning on AWS Config and enabling the appropriate rules, you
can ensure that your S3 buckets do not have unauthorized configuration changes.
upvoted 1 times
6 months, 1 week ago
AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not monitor or
record changes to the configuration of your S3 buckets.
Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to assess
the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.
Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes to
your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: A
AWS Config
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: A
AWS config will monitor config changes
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #27
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product
manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide
access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?
A. Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a
shareable link for the dashboard to the product manager.
B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share
the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
C. Create an IAM user for the company's employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login
credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in
the Dashboards section.
D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP
credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have
appropriate permissions to view the dashboard.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Answere A : https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html
Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates their own
password that they must enter to view the dashboard.
upvoted 50 times
8 months, 2 weeks ago
Thanks for the link! No doubt A is the answer.
upvoted 4 times
1 month, 1 week ago
Nope! The principle of least privilege contradicts that; B is the correct answer. Even ChatGPT says it's B.
upvoted 1 times
Most Recent
1 week, 3 days ago
Selected Answer: A
This solution allows the product manager to access the CloudWatch dashboard without requiring an AWS account or IAM user credentials. By
sharing the dashboard through the CloudWatch console, you can provide direct access to the specific dashboard without granting unnecessary
permissions.
With this approach, the product manager can access the dashboard periodically by simply clicking on the provided link. They will be able to view
the application metrics without the need for an AWS account or IAM user credentials. This ensures that the product manager has the necessary
access while adhering to the principle of least privilege by not granting unnecessary permissions or creating additional IAM users.
upvoted 2 times
4 weeks, 1 day ago
Selected Answer: A
upvoted 1 times
1 month ago
A is my answer here is why. " To help manage this information access, Amazon CloudWatch has introduced CloudWatch dashboard sharing. This
allows customers to easily and securely share their CloudWatch dashboards with people outside of their organization, in another business unit, or
with those with no access AWS console access"
https://aws.amazon.com/blogs/mt/share-your-amazon-cloudwatch-dashboards-with-anyone-using-aws-single-sign-on/
upvoted 1 times
1 month, 1 week ago
answer is B
upvoted 1 times
1 month, 1 week ago
A should be the answer
upvoted 1 times
1 month, 3 weeks ago
Is sharing a CloudWatch dashboard a one-time thing or forever?
upvoted 1 times
2 months ago
B.
This solution follows the principle of least privilege by creating a dedicated IAM user for the product manager with only the necessary permissions
to access the CloudWatch dashboard. The CloudWatchReadOnlyAccess AWS managed policy grants read-only access to CloudWatch resources,
which is sufficient for the product manager to view the metrics. The product manager can access the dashboard using the new login credentials
and the browser URL provided. This solution also avoids sharing the dashboard with the product manager, which may not be desirable from a
security perspective. Option A is not recommended because it involves sharing the dashboard, which may not be secure. Option C grants access to
all company employees, which is more permissive than necessary. Option D involves deploying a bastion server, which is not necessary for this use
case and adds complexity to the solution.
upvoted 4 times
1 month, 4 weeks ago
In option A we can share the specific dashboard over email, but in option B we are assigning the CloudWatchReadOnlyAccess policy,
which would give the product manager access to all dashboards and does not comply with the least privilege principle
upvoted 4 times
2 months, 2 weeks ago
Selected Answer: A
Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates their own
password that they must enter to view the dashboard.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Option A: The solution that will meet the requirement is to share the CloudWatch dashboard from the CloudWatch console with the product
manager. The product manager does not have an AWS account, so creating an IAM user for the product manager would not be the best solution.
Option A provides a shareable link for the dashboard to the product manager and follows the principle of least privilege by only providing access
to the specific dashboard the product manager needs. This is the most appropriate solution for the scenario described.
Option B is not the best solution since it involves creating an IAM user, which is not required, and providing unnecessary permissions to the
product manager.
Option C is not an appropriate solution because the ViewOnlyAccess managed policy does not provide access to CloudWatch resources. Option D
is not an appropriate solution since it involves deploying a bastion server, which is an unnecessary overhead, and requires the product manager to
have access to RDP credentials.
upvoted 3 times
1 month, 1 week ago
Correct, thanks
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
Option B would not be a feasible solution as the product manager does not have an AWS account to use the IAM user login credentials.
Option A would work in this scenario. Sharing the dashboard from the CloudWatch console and providing a shareable link for the dashboard to the
product manager would allow them to access the dashboard without needing an AWS account. This solution follows the principle of least privilege,
as the product manager only has access to the specific dashboard that is shared with them.
upvoted 3 times
3 months ago
A. The product manager does not have an AWS account.
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
Answer is A, sharing credentials under any circumstances is not good.
upvoted 1 times
4 months ago
I will go with B, as the ask is for a specific user (the manager), not for everyone who gets the link.
The most secure and least privileged solution for providing access to an Amazon CloudWatch dashboard for a user without an AWS account is to
create an IAM user for the product manager with the appropriate permissions. By attaching the CloudWatchReadOnlyAccess policy to the user, the
product manager can access only the read-only activities of Amazon CloudWatch, as per the principle of least privilege. The solutions architect
should then share the login credentials and browser URL of the correct dashboard with the product manager.
Option A is incorrect because it is not secure as it requires sharing the dashboard link, which could lead to unauthorized access.
upvoted 2 times
3 months ago
With option A) sharing can be locked down to a single user as per
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html
"Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates their
own password that they must enter to view the dashboard."
Also, with option A permission list is pretty small:
cloudwatch:GetInsightRuleReport
cloudwatch:GetMetricData
cloudwatch:DescribeAlarms
ec2:DescribeTags
while B is "a bit" larger:
autoscaling:Describe*
cloudwatch:Describe*
cloudwatch:Get*
cloudwatch:List*
logs:Get*
logs:List*
logs:StartQuery
logs:StopQuery
logs:Describe*
logs:TestMetricFilter
logs:FilterLogEvents
oam:ListSinks
sns:Get*
sns:List*
upvoted 1 times
3 months, 4 weeks ago
But how can the manager use an IAM role when the question says they do not have an AWS account?
upvoted 2 times
4 months ago
I will go with B, because it's asking about a specific user, not everyone who gets the link.
The most secure and least privileged solution for providing access to an Amazon CloudWatch dashboard for a user without an AWS account is to
create an IAM user for the product manager with the appropriate permissions. By attaching the CloudWatchReadOnlyAccess policy to the user, the
product manager can access only the read-only activities of Amazon CloudWatch, as per the principle of least privilege. The solutions architect
should then share the login credentials and browser URL of the correct dashboard with the product manager.
Option A is incorrect because it is not secure as it requires sharing the dashboard link, which could lead to unauthorized access.
upvoted 2 times
5 months, 1 week ago
Selected Answer: A
The answer is A, because the question says to follow the principle of least privileges.
When sharing a dashboard by providing an e-mail address, AWS creates an IAM role behind the scenes with only 4 permissions:
- cloudwatch:GetInsightRuleReport
- cloudwatch:GetMetricData
- cloudwatch:DescribeAlarms
- ec2:DescribeTags
The person you share the dashboard with has to enter a username + password every time they want to see the dashboard (even without having an
IAM user!) and they will then get the permissions assigned to the previously created IAM role (happening behind the scenes).
Option B suggests creating an IAM user with the CloudWatchReadOnlyAccess policy, which provides far more access than the 4 permissions listed
above.
upvoted 4 times
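The four permissions listed above can be written out as an IAM policy document. This is a hedged reconstruction based on the comment's list, not necessarily the exact document AWS generates for the behind-the-scenes dashboard-sharing role:

```python
import json

# Minimal viewer policy for a shared CloudWatch dashboard, reconstructed
# from the four permissions listed in the comment above (illustrative).
SHARED_DASHBOARD_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetInsightRuleReport",
                "cloudwatch:GetMetricData",
                "cloudwatch:DescribeAlarms",
                "ec2:DescribeTags",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(SHARED_DASHBOARD_POLICY, indent=2))
```

Four actions versus the dozens granted by CloudWatchReadOnlyAccess is the crux of the least-privilege argument for A over B.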
5 months, 1 week ago
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html#share-cloudwatch-dashboard-iamrole
upvoted 1 times
5 months, 1 week ago
Answer: A
https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=testing
To share a dashboard publicly
upvoted 1 times
5 months, 1 week ago
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html
upvoted 1 times
5 months, 1 week ago
To share a dashboard with specific users
upvoted 1 times
Topic 1
Question #28
A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally
by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company
must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?
A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the
company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed
Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
C. Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
Correct Answer:
A
Highly Voted
8 months ago
Selected Answer: B
Tricky question!!! forget one-way or two-way. In this scenario, AWS applications (Amazon Chime, Amazon Connect, Amazon QuickSight, AWS
Single Sign-On, Amazon WorkDocs, Amazon WorkMail, Amazon WorkSpaces, AWS Client VPN, AWS Management Console, and AWS Transfer
Family) need to be able to look up objects from the on-premises domain in order for them to function. This tells you that authentication needs to
flow both ways. This scenario requires a two-way trust between the on-premises and AWS Managed Microsoft AD domains.
It is a requirement of the application
Scenario 2: https://aws.amazon.com/es/blogs/security/everything-you-wanted-to-know-about-trusts-with-aws-managed-microsoft-ad/
upvoted 38 times
1 month, 3 weeks ago
What I did find though was documentation that explicitly states that IAM Identity Center (successor to AWS SSO) requires a two-way trust:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html
upvoted 4 times
1 month, 3 weeks ago
The problem with this is that nowhere in the question is it saying that the application needs to be able to flow back so two-way is not needed.
upvoted 1 times
Highly Voted
7 months, 3 weeks ago
Answer B as we have AWS SSO which requires two way trust. As per documentation - A two-way trust is required for AWS Enterprise Apps such as
Amazon Chime, Amazon Connect, Amazon QuickSight, AWS IAM Identity Center (successor to AWS Single Sign-On), Amazon WorkDocs, Amazon
WorkMail, Amazon WorkSpaces, and the AWS Management Console. AWS Managed Microsoft AD must be able to query the users and groups in
your self-managed AD.
Amazon EC2, Amazon RDS, and Amazon FSx will work with either a one-way or two-way trust.
upvoted 8 times
1 month, 3 weeks ago
I found the documentation that explicitly states that IAM Identity Center (successor to AWS SSO) requires a two-way trust:
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html
upvoted 1 times
Most Recent
1 week, 3 days ago
Selected Answer: A
The recommended solution is option A: Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console and create a one-way forest trust or a
one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO using AWS Directory Service for Microsoft
Active Directory.
By implementing this solution, the company can achieve a single sign-on experience for their AWS accounts while maintaining central control over
user and group management in their on-premises Active Directory. The one-way trust ensures that user and group information flows securely from
the on-premises directory to AWS SSO, allowing for centralized access management and control across all AWS accounts.
upvoted 2 times
2 weeks, 6 days ago
Selected Answer: D
Can we have a moderator ?
upvoted 1 times
1 month, 1 week ago
A is correct.
Option B comes with the security risk of a two-way trust.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 1 times
1 month, 2 weeks ago
AWS IAM Identity Center (successor to AWS Single Sign-On) requires a two-way trust so that it has permissions to read user and group.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: A
This is the answer of chatGPT:
Option A is the best solution that meets the requirements of providing a single sign-on (SSO) solution across all the company's accounts while
continuing to manage users and groups in the on-premises self-managed Microsoft Active Directory.
Explanation:
Option A is the best solution as it enables AWS Single Sign-On (AWS SSO) from the AWS SSO console and creates a one-way forest trust or a
one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for
Microsoft Active Directory. This solution allows the company to manage users and groups in the on-premises Active Directory and provides a single sign-on
(SSO) experience across all the company's AWS accounts.
upvoted 2 times
2 months ago
I think it's A. From ChatGPT:
=========
Should this be one way or two way trust?
To integrate AWS SSO with an on-premises Microsoft Active Directory, a one-way trust relationship should be established.
In a one-way trust relationship, the on-premises Microsoft Active Directory trusts the AWS SSO directory, but the AWS SSO directory does not trust
the on-premises Microsoft Active Directory. This means that users and groups in the on-premises Microsoft Active Directory can be mapped to
AWS SSO users and groups, but not vice versa.
This is the recommended approach for security reasons, as it ensures that the on-premises Microsoft Active Directory is not exposed to external
entities. The one-way trust relationship also simplifies administration and reduces the risk of errors in configuration.
upvoted 2 times
1 week ago
That's wrong! Think for yourself instead of relying on software:
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Answer is B
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
A two-way trust would enable AWS SSO to retrieve user and group information from the on-premises AD domain, and would also allow changes
made to users and groups in AWS SSO to be synchronized back to the on-premises AD.
Option A, which suggests creating a one-way trust relationship, would not enable synchronization of user and group information between AWS
SSO and the on-premises AD domain.
upvoted 1 times
3 months ago
Selected Answer: B
It's B.
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: D
D. I'm going for this because adding the AWS directory service means that you can manage adding users within AWS as well as on prem. Installing
an identity provider on premises (like AD Federation Service) means you can continue to manage everything on premises and use SAML with SSO
upvoted 1 times
3 months, 4 weeks ago
B
Create a two-way trust relationship – When two-way trust relationships are created between AWS Managed Microsoft AD and a self-managed
directory in AD, users in your self-managed directory in AD can sign in with their corporate credentials to various AWS services and business
applications. One-way trusts do not work with IAM Identity Center.
AWS IAM Identity Center (successor to AWS Single Sign-On) requires a two-way trust so that it has permissions to read user and group information
from your domain to synchronize user and group metadata. IAM Identity Center uses this metadata when assigning access to permission sets or
applications. User and group metadata is also used by applications for collaboration, like when you share a dashboard with another user or group.
The trust from AWS Directory Service for Microsoft Active Directory to your domain permits IAM Identity Center to trust your domain for
authentication. The trust in the opposite direction grants AWS permissions to read user and group metadata.
upvoted 2 times
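The two-way trust described above maps to AWS Directory Service's CreateTrust API (the boto3 `ds` client's `create_trust` call). The sketch below only assembles the request parameters; the directory ID, domain name, and password are illustrative assumptions:

```python
def build_trust_request(directory_id, remote_domain, trust_password):
    """Parameters for ds.create_trust(**params), establishing the two-way
    forest trust that IAM Identity Center (successor to AWS SSO) requires.
    All argument values here are placeholders, not real identifiers."""
    return {
        "DirectoryId": directory_id,        # AWS Managed Microsoft AD
        "RemoteDomainName": remote_domain,  # on-premises self-managed AD
        "TrustPassword": trust_password,
        "TrustDirection": "Two-Way",        # one-way trusts do not work with SSO
        "TrustType": "Forest",
    }

params = build_trust_request("d-1234567890", "corp.example.com", "placeholder-secret")
print(params["TrustDirection"])
```

Changing `TrustDirection` to a one-way value would satisfy option A's wording, but per the AWS documentation quoted above it would break the user/group metadata sync that SSO depends on.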
4 months, 1 week ago
The solution that will meet these requirements is option A, which is to enable AWS Single Sign-On (AWS SSO) from the AWS SSO console and
create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using
AWS Directory Service for Microsoft Active Directory.
This option provides a secure and efficient way to integrate the company's on-premises Microsoft Active Directory with AWS SSO, allowing users to
log in to AWS accounts and applications using their existing Active Directory credentials. A one-way trust enables authentication from the Active
Directory to AWS SSO, but not the other way around, ensuring that the Active Directory is not exposed to security risks from AWS SSO.
upvoted 2 times
5 months, 1 week ago
D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.
The company can use AWS SSO to enable SSO across all the company's accounts that are managed by AWS Organizations. To achieve this, the
company will need to deploy an identity provider (IdP) on-premises, such as Microsoft Active Directory, and configure it to work with AWS SSO.
This will allow the company to continue managing its users and groups in the on-premises self-managed Microsoft Active Directory, while also
providing SSO across all the company's AWS accounts.
upvoted 2 times
5 months, 1 week ago
Selected Answer: B
It's B. In order to connect an on-premise MS AD to AWS SSO (now AWS Identity Centre), you can either use an AD Connector (not one of the
options) or a 2-way trust relationship between an AWS Managed MS AD and an on-premise MS AD.
The AWS docs specifically say that a 1-way trust relationship does NOT work with SSO.
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html
upvoted 2 times
Topic 1
Question #29
A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that
run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?
A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the
NLB as an AWS Global Accelerator endpoint in each Region.
B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the
ALB as an AWS Global Accelerator endpoint in each Region.
C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an
Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as
an origin.
D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create
an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted
record as an origin.
Correct Answer:
C
Highly Voted
8 months, 1 week ago
Selected Answer: A
agree with A,
Global Accelerator has automatic failover and is perfect for this scenario with VoIP
https://aws.amazon.com/global-accelerator/faqs/
upvoted 28 times
8 months ago
Thank you for your link; it helped me settle on A.
upvoted 6 times
5 months, 1 week ago
This option does not meet the requirements because AWS Global Accelerator is only used to route traffic to the optimal AWS Region, it does
not provide automatic failover between regions.
upvoted 2 times
3 months, 4 weeks ago
Instant regional failover: AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to
healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts
instantaneously to route your users to the next available endpoint.
upvoted 3 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: A
CloudFront uses Edge Locations to cache content while Global Accelerator uses Edge Locations to find an optimal pathway to the nearest regional
endpoint. CloudFront is designed to handle HTTP protocol meanwhile Global Accelerator is best used for both HTTP and non-HTTP protocols such
as TCP and UDP. so i think A is a better answer
upvoted 20 times
Most Recent
1 week, 3 days ago
Selected Answer: A
Option A, which suggests deploying a Network Load Balancer (NLB) and using it as an AWS Global Accelerator endpoint in each Region, does
provide automated failover between Regions.
When using AWS Global Accelerator, it automatically routes traffic to the closest AWS edge location based on latency and network conditions. In
case of a failure in one Region, AWS Global Accelerator will automatically reroute traffic to the healthy endpoints in another Region, providing
automated failover.
So, option A does meet the requirement for automated failover between Regions, in addition to routing users to the Region with the lowest latency
using AWS Global Accelerator.
upvoted 2 times
4 weeks, 1 day ago
Selected Answer: C
If the answer is A, how exactly can we accomplish this: "route users to the Region with the lowest latency"?
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
UDP --> NLB --> A or C.
I believe C is not an option because you cannot set up a route 53 record as a cloudfront origin:
https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_Origin.html
upvoted 1 times
1 month, 2 weeks ago
Instant regional failover: AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to healthy
application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts instantaneously to route
your users to the next available endpoint.
upvoted 1 times
1 month, 3 weeks ago
To keep it as simple as possible:
UDP? -> NLB
Failover? -> Global Accelerator. It has "Instant regional failover" which can be found explained here under "benefits"
https://aws.amazon.com/global-accelerator/faqs/
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: A
This is the answer of ChatGPT "Option A is the best solution that meets the requirements of routing users to the Region with the lowest latency and
providing automated failover between Regions for the company's Voice over Internet Protocol (VoIP) service that uses UDP connections.
Explanation:
Option A is the best solution as it deploys a Network Load Balancer (NLB) and an associated target group, and associates the target group with the
Auto Scaling group. The NLB can be used as an AWS Global Accelerator endpoint in each Region, allowing users to be routed to the Region with
the lowest latency. Additionally, the NLB can automatically failover between Regions to ensure service availability.
Option B is not the best solution as an Application Load Balancer (ALB) is designed for HTTP/HTTPS traffic and may not be suitable for the
company's VoIP service that uses UDP connections."
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Option A is the best solution for this scenario.
Deploying a Network Load Balancer (NLB) and using it as an AWS Global Accelerator endpoint in each Region will ensure that traffic is
automatically routed to the Region with the lowest latency. This solution also provides automated failover between Regions. NLB is designed to
handle high UDP traffic volumes and provide low latency, making it a good choice for a VoIP service that uses UDP connections.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Key words:
UDP - NLB
failover - AWS global accelerator
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
AWS Global Accelerator is a service that improves the availability and performance of applications running in multiple AWS Regions or across
multiple AWS accounts. It allows clients to be routed to the optimal AWS endpoint based on the proximity to the edge location that has the lowest
latency.
Here's a high-level overview of how AWS Global Accelerator works:
A client sends a request to an AWS Global Accelerator endpoint that resolves to a static IP address.
AWS Global Accelerator determines the optimal AWS endpoint for the client based on the proximity of the edge location to the client and the
health of the endpoints in the AWS Regions.
The client is then routed to the optimal endpoint for their request, which could be an Elastic IP address, an Amazon EC2 instance, or an Amazon
Elastic Load Balancer.
If an endpoint becomes unhealthy, AWS Global Accelerator detects it and automatically redirects traffic to healthy endpoints.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
Option A would be the best solution to meet the company's requirements of routing users to the Region with the lowest latency and enabling
automated failover between Regions.
Deploying a Network Load Balancer (NLB) and an associated target group would allow the company to distribute traffic to the EC2 instances in the
Auto Scaling group based on the UDP protocol. By using the NLB as an AWS Global Accelerator endpoint in each Region, traffic can be
automatically routed to the Region with the lowest latency.
upvoted 1 times
4 months ago
Answer is A, Cloudfront can be discounted as it is not for UDP traffic
upvoted 1 times
4 months, 1 week ago
Amazon Route 53 Latency Record: Supports failover across Regions, enabling traffic to be routed to another Region if the primary Region becomes
unavailable. NLB as an AWS Global Accelerator Endpoint: Supports failover within a Region, enabling traffic to be distributed to other targets if one
or more targets become unavailable.The first approach can provide better end-user latency and high availability, but at the cost of additional
complexity and cost. The second approach provides a simpler and more streamlined solution, but may not be as effective in reducing end-user
latency or providing failover support.
upvoted 1 times
4 months, 1 week ago
Amazon Route 53 Latency Record: Supports failover across Regions, enabling traffic to be routed to another Region if the primary Region becomes
unavailable. NLB as an AWS Global Accelerator Endpoint: Supports failover within a Region, enabling traffic to be distributed to other targets if one
or more targets become unavailable.
upvoted 1 times
4 months, 2 weeks ago
Answer is C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create
an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as
an origin.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
A. Global accelerator will connect all regions, it has low latency and failover.
upvoted 1 times
Topic 1
Question #30
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights
enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of
running the tests without reducing the compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
A. Stop the DB instance when tests are completed. Restart the DB instance when required.
B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.
Correct Answer:
C
Highly Voted
8 months, 3 weeks ago
Selected Answer: C
Answer C, you still pay for storage when an RDS database is stopped
upvoted 24 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
C - Create a manual snapshot of the DB (backed by S3 Standard) and restore from the manual snapshot when required.
Not A - Although stopping the DB means you are not paying for DB instance hours, you are still paying for provisioned IOPS, and the storage for a
stopped DB costs more than a snapshot of the underlying EBS volumes plus automated backups.
Not D - Possible, but not the MOST cost-effective; there is no need to run the RDS instance when it is not needed.
upvoted 7 times
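The snapshot-then-terminate cycle in option C maps onto three boto3 `rds` calls a monthly job would make: `create_db_snapshot` after testing, `delete_db_instance`, then `restore_db_instance_from_db_snapshot` before the next run. The sketch below only builds the parameter sets; the identifiers and instance class are illustrative assumptions:

```python
def monthly_teardown_and_restore(db_id, snapshot_id, instance_class="db.m5.large"):
    """Parameter sets for the three rds calls in option C:
    snapshot after the 48-hour test, delete the instance so no storage
    or compute is billed, restore the same-size instance next month."""
    snapshot = {"DBSnapshotIdentifier": snapshot_id, "DBInstanceIdentifier": db_id}
    delete = {
        "DBInstanceIdentifier": db_id,
        "SkipFinalSnapshot": True,  # safe only because we took a manual snapshot
    }
    restore = {
        "DBInstanceIdentifier": db_id,
        "DBSnapshotIdentifier": snapshot_id,
        "DBInstanceClass": instance_class,  # same compute/memory as before
    }
    return snapshot, delete, restore

snap, delete, restore = monthly_teardown_and_restore("test-db", "test-db-monthly")
print(restore["DBSnapshotIdentifier"])
```

Between runs only the snapshot is billed, which is cheaper than a stopped instance that keeps paying for its full provisioned storage.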
Most Recent
1 week, 3 days ago
Selected Answer: C
Option C can be a cost-effective solution for reducing the cost of running tests on the RDS instance.
By creating a snapshot and terminating the DB instance, you effectively stop incurring costs for the running instance. When you need to run the
tests again, you can restore the snapshot to create a new instance and resume testing. This approach allows you to save costs during the periods
when the tests are not running.
However, it's important to note that option C involves additional steps and may result in some downtime during the restoration process. You need
to consider the time required for snapshot creation, termination, and restoration when planning the testing schedule.
upvoted 2 times
1 week, 3 days ago
Selected Answer: C
Can't be A because you're still charged for provisioned storage even when it's stopped.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: C
By only stopping an Amazon RDS DB instance, you stop billing for additional instance hours, but you will still incur storage costs. See:
https://aws.amazon.com/rds/pricing/
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: C
Trick: in a stopped RDS database, you will still pay for storage. If you plan on
stopping it for a long time, you should snapshot & restore instead
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: C
Comparing A and C: for 48 hours of usage per month, C's cost is lower.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
Option A, stopping the DB instance when tests are completed and restarting it when required, would be the most cost-effective solution to reduce
the cost of running the tests while maintaining the same compute and memory attributes of the DB instance.
By stopping the DB instance when the tests are completed, the company will only be charged for storage and not for compute resources while the
instance is stopped. This can result in significant cost savings as compared to running the instance continuously.
When the tests need to be run again, the company can simply start the DB instance, and it will be available for use. This solution is straightforward
and does not require any additional configuration or infrastructure.
upvoted 2 times
2 months, 1 week ago
If you stop RDS, it auto-starts again after 7 days. Here the requirement is once a month, hence C.
upvoted 1 times
3 months ago
Selected Answer: C
C is the most cost effective.
upvoted 1 times
5 months, 2 weeks ago
You can't stop an Amazon RDS for SQL Server DB instance in a Multi-AZ configuration.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Amazon RDS for MySQL allows you to create a snapshot of your DB instance and store it in Amazon S3. You can then terminate the DB instance
and restore it from the snapshot when required. This will allow you to reduce the cost of running the resource-intensive tests without reducing the
compute and memory attributes of the DB instance.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
C is right choice here
upvoted 1 times
6 months ago
Selected Answer: C
Explanation from the same question on UDEMY!
Taking a snapshot of the instance and storing the snapshot is the most cost-effective solution. When needed, a new database can be created from
the snapshot. Performance Insights can be enabled on the new instance if needed. Note that the previous data from Performance Insights will not
be associated with the new instance, however this was not a requirement.
CORRECT: "Create a snapshot of the database when the tests are completed. Terminate the DB instance. Create a new DB instance from the
snapshot when required” is the correct answer (as explained above.)
upvoted 4 times
6 months ago
INCORRECT: "Stop the DB instance once all tests are completed. Start the DB instance again when required” is incorrect. You will be charged
when your instance is stopped. When an instance is stopped you are charged for provisioned storage, manual snapshots, and automated
backup storage within your specified retention window, but not for database instance hours. This is more costly compared to using snapshots.
INCORRECT: "Create an Auto Scaling group for the DB instance and reduce the desired capacity to 0 once the tests are completed” is incorrect.
You cannot use Auto Scaling groups with Amazon RDS instances.
INCORRECT: "Modify the DB instance size to a smaller capacity instance when all the tests have been completed. Scale up again when required”
is incorrect. This will reduce compute and memory capacity and will be more costly than taking a snapshot and terminating the DB.
upvoted 2 times
6 months, 1 week ago
Answer is C,
Because the question says monthly tests, and you can stop a DB instance for up to seven days. If you don't manually start your DB instance after
seven days, your DB instance is automatically started.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html
So, in this case, since it runs a test once a month, creating a snapshot is the more appropriate and cost-effective way.
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
Option A, stopping the DB instance when tests are completed and restarting it when required, would be the most cost-effective solution for
reducing the cost of running resource-intensive tests on an Amazon RDS for MySQL DB instance.
By stopping the DB instance, you will no longer be charged for any compute or memory resources used by the instance. When the tests are
completed, you can restart the DB instance to resume using it. This will allow you to avoid paying for resources that are not being used, while still
maintaining the same compute and memory attributes of the DB instance for the tests.
upvoted 2 times
6 months, 1 week ago
Option B, using an Auto Scaling policy with the DB instance to automatically scale when tests are completed, would not be a cost-effective
solution as it would not reduce the cost of running the tests. Auto Scaling allows you to automatically increase or decrease the capacity of your
DB instance based on predefined rules, but it does not provide a way to reduce the cost of running the tests.
Option C, creating a snapshot when tests are completed and then terminating the DB instance and restoring the snapshot when required, would
also not be a cost-effective solution. While creating a snapshot can be a useful way to save a copy of your database, it does not reduce the cost
of running the tests. Additionally, restoring a snapshot to a new DB instance would require you to pay for the resources used by the new
instance.
upvoted 1 times
5 months, 2 weeks ago
https://docs.aws.amazon.com/pt_br/AmazonRDS/latest/UserGuide/USER_StopInstance.html
Important
"You can stop a DB instance for up to seven days. If you don't manually start your DB instance after seven days, your DB instance is
automatically started. This way, it doesn't fall behind any required maintenance updates."
upvoted 3 times
6 months, 1 week ago
Option D, modifying the DB instance to a low-capacity instance when tests are completed and then modifying it back again when required,
would not meet the requirement to maintain the same compute and memory attributes of the DB instance for the tests. Modifying the DB
instance to a low-capacity instance would result in a reduction in the resources available to the DB instance, which would not be sufficient for
the resource-intensive tests.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
C is the best and most cost effective option
upvoted 1 times
6 months, 2 weeks ago
A
Stopping the DB instance when tests are completed and restarting it when required will be the most cost-effective solution for reducing the cost of
running the resource-intensive tests. When an Amazon RDS for MySQL DB instance is stopped, the instance will no longer be charged for compute
and memory usage, which will significantly reduce the cost of running the tests. Option C is not correct for me because snapshots are used to
create backups of data but do not reduce the cost of running a DB instance.
upvoted 1 times
Topic 1
Question #31
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift
clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to
periodically run the code.
Correct Answer:
A
Highly Voted
6 months, 1 week ago
Answer from ChatGPT:
Yes, you can use AWS Config to create tags for your resources. AWS Config is a service that enables you to assess, audit, and evaluate the
configurations of your AWS resources. You can use AWS Config to create rules that automatically tag resources when they are created or when
their configurations change.
To create tags for your resources using AWS Config, you will need to create an AWS Config rule that specifies the tag key and value you want to
use and the resources you want to apply the tag to. You can then enable the rule and AWS Config will automatically apply the tag to the specified
resources when they are created or when their configurations change.
upvoted 11 times
Most Recent
1 week, 3 days ago
Selected Answer: A
AWS Config provides a set of pre-built or customizable rules that can be used to check the configuration and compliance of AWS resources. By
creating a custom rule or using the built-in rule for tagging, you can define the required tags for EC2, RDS DB and Redshift clusters. AWS Config
continuously monitors the resources and generates configuration change events or evaluation results.
By leveraging AWS Config, the solution can automatically detect any resources that do not comply with the defined tagging requirements. This
approach eliminates the need for manual checks or periodic code execution, reducing operational overhead. Additionally, AWS Config provides the
ability to automatically remediate non-compliant resources by triggering Lambda or sending notifications, further streamlining the configuration
management process.
Option B (using Cost Explorer) primarily focuses on cost analysis and does not provide direct enforcement of proper tagging. Option C and D
(writing API calls and running them manually or through scheduled Lambda) require more manual effort and maintenance compared to using AWS
Config rules.
upvoted 2 times
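The built-in tagging rule described above can be sketched concretely. A minimal sketch of option A using the AWS-managed REQUIRED_TAGS rule; the rule name and tag keys ("Environment", "Owner") are hypothetical examples, and the actual API call is left commented since it needs credentials:

```python
import json

def build_required_tags_rule(rule_name, tag_keys, resource_types):
    """Build the ConfigRule payload for put_config_rule using the
    AWS-managed REQUIRED_TAGS rule."""
    # REQUIRED_TAGS takes its tag keys as tag1Key, tag2Key, ... (up to 6)
    params = {f"tag{i}Key": key for i, key in enumerate(tag_keys, start=1)}
    return {
        "ConfigRuleName": rule_name,
        "Scope": {"ComplianceResourceTypes": resource_types},
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps(params),
    }

rule = build_required_tags_rule(
    "required-tags-check",                       # hypothetical rule name
    ["Environment", "Owner"],                    # hypothetical tag keys
    ["AWS::EC2::Instance", "AWS::RDS::DBInstance",
     "AWS::Redshift::Cluster"],
)

# With credentials configured, the rule would be created like this:
# import boto3
# boto3.client("config").put_config_rule(ConfigRule=rule)
```

Once the rule is active, AWS Config continuously evaluates the three resource types and flags any that are missing the listed tag keys, with no scheduled code to maintain.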
1 week, 5 days ago
Selected Answer: A
The answer is A
upvoted 1 times
3 weeks, 5 days ago
Selected Answer: A
Option A will accomplish the requirements.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
AWS Config can track the configuration status of non-compliant resources :))
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
AWS Config can track the configuration status of non-compliant resources.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
Option A is the most appropriate solution to accomplish the given requirement because AWS Config Rules provide a way to evaluate the
configuration of AWS resources against best practices and company policies. In this case, a custom AWS Config rule can be defined to check for
proper tag allocation on Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters. The rule can be configured to run
periodically and notify the responsible parties when a resource is not properly tagged.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
Key words: configured with tags
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
AWS Config is a service that provides a detailed view of the configuration of AWS resources in an account. AWS Config rules can be used to define
and detect resources that are not properly tagged. These rules can be customized to match specific requirements and automatically check all
resources for proper tag allocation. When resources are found without the proper tags, AWS Config can trigger an SNS notification or an AWS
Lambda function to perform the required action.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: A
AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related
to one another, and how the configurations and their relationships have changed over time.
upvoted 1 times
4 months, 3 weeks ago
I found this question very vague.
upvoted 2 times
5 months, 2 weeks ago
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically
run the code.
A solution architect can accomplish this by writing API calls to check all resources (EC2 instances, RDS DB instances, and Redshift clusters) for
proper tag allocation. Then, schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code. This way, the check will
be automated and it eliminates the need to manually check and configure the resources. The Lambda function can be triggered periodically and
will check all resources, this way it will minimize the effort of configuring and operating the check.
upvoted 2 times
4 months, 3 weeks ago
How about the key sentence "The company wants to minimize the effort of configuring and operating this check"? Either A or B, and I vouch for A.
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
are configured with tags = AWS config
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
To minimize the effort of ensuring that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are properly tagged, a
solutions architect should use AWS Config rules to define and detect resources that are not properly tagged.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use Config rules to define
conditions for resources in your AWS environment and then automatically check whether those conditions are met. If a resource does not meet the
conditions specified by a Config rule, the rule can trigger an AWS Config event that can be used to take corrective action.
upvoted 4 times
6 months, 1 week ago
Using AWS Config rules to define and detect resources that are not properly tagged allows you to automate the process of checking for and
correcting improperly tagged resources. This will minimize the effort required to configure and operate this check, as you will not need to
manually check for or tag improperly tagged resources.
Option B, using Cost Explorer to display resources that are not properly tagged and then tagging those resources manually, would not be an
effective solution as it would require manual effort to identify and tag improperly tagged resources.
upvoted 2 times
6 months, 1 week ago
Option C, writing API calls to check all resources for proper tag allocation and then running the code periodically on an EC2 instance, would
also not be an effective solution as it would require manual effort to run the code and check for improperly tagged resources.
Option D, writing API calls to check all resources for proper tag allocation and scheduling an AWS Lambda function through Amazon
CloudWatch to periodically run the code, would be a more automated solution than option C, but it would still require manual effort to write
and maintain the code and schedule the Lambda function. Using AWS Config rules would be a more efficient and effective way to automate
the process of checking for and correcting improperly tagged resources.
upvoted 3 times
6 months, 2 weeks ago
D is correct.
Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code. This will enable the company to automatically check
its resources for proper tag allocation without the need for manual intervention. Option A is not correct for me because AWS Config rules cannot be
used to detect resources that are not properly tagged; AWS Config rules can be used to evaluate the configuration of resources, but not to check
for proper tag allocation.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/config/latest/developerguide/tagging.html
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #32
A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side
JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?
A. Containerize the website and host it in AWS Fargate.
B. Create an Amazon S3 bucket and host the website there.
C. Deploy a web server on an Amazon EC2 instance to host the website.
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
Good answer is B: client-side JavaScript means the website is static, so it must be S3.
upvoted 17 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
HTML, CSS, client-side JavaScript, and images are all static resources.
upvoted 7 times
Most Recent
1 week, 3 days ago
Selected Answer: B
By using Amazon S3 to host the website, you can take advantage of its durability, scalability, and low-cost pricing model. You only pay for the
storage and data transfer associated with your website, without the need for managing and maintaining web servers or containers. This reduces the
operational overhead and infrastructure costs.
Containerizing the website and hosting it in AWS Fargate (option A) would involve additional complexity and costs associated with managing the
container environment and scaling resources. Deploying a web server on an Amazon EC2 instance (option C) would require provisioning and
managing the EC2 instance, which may not be cost-effective for a static website. Configuring an Application Load Balancer with an AWS Lambda
target (option D) adds unnecessary complexity and may not be the most efficient solution for hosting a static website.
upvoted 2 times
3 weeks, 5 days ago
Selected Answer: B
Option B is the MOST cost-effective for hosting the website.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
static website = B
upvoted 1 times
1 month, 4 weeks ago
Since all the contents are static, S3 can be used to host them.
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
static website B
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
static website so B
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
With S3, the company can store and serve its website contents, such as HTML, CSS, client-side JavaScript, and images, as static content. The cost of
hosting a website on S3 is relatively low as compared to other options because S3 pricing is based on storage and data transfer usage, which is
generally less expensive than other hosting options like EC2 instances or containers. Additionally, the request and data transfer charges for serving
a small internal site from an S3 bucket are typically minimal.
upvoted 2 times
3 months, 3 weeks ago
Selected Answer: B
The most cost-effective method for hosting the website is option B: Create an Amazon S3 bucket and host the website there.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
The most cost-effective method for hosting the website is option B: Create an Amazon S3 bucket and host the website there.
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
static content thru S3
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
In general, it is more cost-effective to use S3 for hosting static website content because it is a lower-cost storage service compared to Fargate,
which is a compute service
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
The most cost-effective method for hosting a website that consists of HTML, CSS, client-side JavaScript, and images would be to create an Amazon
S3 bucket and host the website there.
Amazon S3 (Simple Storage Service) is an object storage service that enables you to store and retrieve data over the internet. It is a highly scalable,
reliable, and low-cost storage service that is well-suited for hosting static websites. You can use Amazon S3 to host a website by creating a bucket,
uploading your website content to the bucket, and then configuring the bucket as a static website hosting location.
upvoted 1 times
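The three steps described above (create a bucket, upload the content, configure static website hosting) can be sketched in boto3. The bucket name is a hypothetical placeholder, and the live calls are commented out since they need credentials; real hosting also requires public-read access settings, which are omitted here:

```python
def build_website_config(index_doc="index.html", error_doc="error.html"):
    """Build the WebsiteConfiguration payload for put_bucket_website."""
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }

config = build_website_config()

# With credentials configured:
# import boto3
# s3 = boto3.client("s3")
# s3.create_bucket(Bucket="dev-team-site-example")   # hypothetical name
# s3.upload_file("index.html", "dev-team-site-example", "index.html")
# s3.put_bucket_website(Bucket="dev-team-site-example",
#                       WebsiteConfiguration=config)
```

There is no server to patch or scale: S3 serves the HTML, CSS, JavaScript, and images directly, which is why this is the cheapest option for purely static content.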
6 months, 1 week ago
Hosting a website in an Amazon S3 bucket is generally more cost-effective than hosting it on an Amazon EC2 instance or using a containerized
solution like AWS Fargate, as it does not require you to pay for compute resources. It is also more cost-effective than configuring an Application
Load Balancer with an AWS Lambda target that uses the Express.js framework, as this approach would require you to pay for both compute
resources and the use of the Application Load Balancer and AWS Lambda.
In summary, hosting a website in an Amazon S3 bucket is the most cost-effective method for hosting a website that consists of HTML, CSS,
client-side JavaScript, and images.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Static website = S3
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
B looks correct
upvoted 1 times
Topic 1
Question #33
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The
company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications.
Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write.
Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda
integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every
transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis
data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before
updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume
transaction files stored in Amazon S3.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
I would go for C. The tricky phrase is "near-real-time solution", pointing to Firehose, but Firehose can't send data to DynamoDB, so that leaves us
with C as the best option.
Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, NewRelic, Dynatrace,
Sumologic, LogicMonitor, MongoDB, and HTTP End Point as destinations.
https://aws.amazon.com/kinesis/data-
firehose/faqs/#:~:text=Kinesis%20Data%20Firehose%20currently%20supports,HTTP%20End%20Point%20as%20destinations.
upvoted 35 times
3 months, 1 week ago
There are many questions having Firehose and Stream. Need to know them in detail to answer. Thanks for the explanation
upvoted 3 times
5 months ago
This was a really tough one. But you have the best explanation on here with reference point. Thanks. I’m going with answer C!
upvoted 2 times
4 months, 3 weeks ago
Sorry but I still can't see how Kinesis Data Stream is 'scalable', since you have to provision the quantity of shards in advance?
upvoted 1 times
4 months ago
"Easily stream data at any scale" is how Kinesis Data Streams is described. You configure the shard count, but you still do not provision and
manage the underlying capacity yourself.
upvoted 1 times
Highly Voted
8 months, 1 week ago
The answer is C, because Firehose does not support DynamoDB, and another keyword is "data": Kinesis Data Streams is the correct choice. Pay
attention to keywords. AWS likes to trip you up to make sure you know the services.
upvoted 23 times
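Option C's transform step can be sketched as a small Lambda consumer of the stream. This is a minimal sketch assuming the standard Kinesis-to-Lambda event format; the sensitive field names are hypothetical examples, and the DynamoDB write is left as a comment:

```python
import base64
import json

# Hypothetical set of fields to strip before storage
SENSITIVE_KEYS = {"card_number", "cvv", "ssn"}

def redact(transaction):
    """Return a copy of the transaction without sensitive fields."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_KEYS}

def handler(event, context):
    cleaned = []
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        cleaned.append(redact(payload))
    # In the real function, each cleaned item would be written to DynamoDB,
    # e.g. boto3.resource("dynamodb").Table("transactions").put_item(...)
    return cleaned
```

Other internal applications attach their own consumers to the same Kinesis data stream, so the raw feed is shared in near real time while only the redacted copy lands in the document database.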
Most Recent
1 week, 3 days ago
Selected Answer: C
To meet the requirements of sharing financial transaction details with several other internal applications, and processing and storing the
transactions data in a scalable and near-real-time manner, a solutions architect should recommend option C: Stream the transactions data into
Amazon Kinesis Data Streams, use AWS Lambda integration to remove sensitive data, and then store the transactions data in Amazon DynamoDB.
Other applications can consume the transactions data off the Kinesis data stream.
Option A (storing transactions data in DynamoDB and using DynamoDB Streams) may not provide the same level of scalability and real-time data
sharing as Kinesis Data Streams. Option B (using Kinesis Data Firehose to store data in DynamoDB and S3) adds unnecessary complexity and
additional storage costs. Option D (storing batched transactions data in S3 and processing with Lambda) may not provide the required near-real-
time data sharing and low-latency retrieval compared to the streaming-based solution.
upvoted 2 times
1 week, 6 days ago
Selected Answer: C
It's C because yes.
upvoted 1 times
2 weeks, 1 day ago
I think it is B. Kinesis Data Streams can import data from DynamoDB, but cannot export data to DynamoDB. A data stream only supports exporting
to Lambda, Kinesis Data Firehose, Kinesis Data Analytics, or AWS Glue. Exporting from a data stream to other destinations needs an ETL transform
process, which is Firehose's function.
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: B
Near real time - Firehose.
Besides, DynamoDB is not the destination; Lambda is.
And Lambda can be used, since you can expose it behind HTTP.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
That is definitely B. It says "near real time", which makes sense:
near real time: Kinesis Data Firehose
real time: Kinesis Data Streams
Also, Kinesis Data Firehose supports DynamoDB. The link is below:
https://dynobase.dev/dynamodb-faq/can-firehose-write-to-dynamodb/#:~:text=Answer,data%20to%20a%20DynamoDB%20table.
upvoted 1 times
1 month, 1 week ago
Option B says that Firehose will store data in both Amazon DynamoDB and Amazon S3; I think it's not possible to have more than one consumer,
so solution B is impossible.
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: B
For me the answer is B. Kinesis Data Firehose can transfer data to DynamoDB, and the key word in the question is "near real time":
Real time = Kinesis Data Streams
Near real time = Kinesis Data Firehose
upvoted 4 times
2 months, 3 weeks ago
Selected Answer: B
Kinesis Data Firehose does have integration with Lambda. Kinesis Data Streams does not have that integration, so B is correct.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: C
Near real time: Kinesis Data Streams & Kinesis Data Firehose
Kinesis Data Streams :: used for streaming live data
Kinesis Data Firehose :: used when you have to store the streaming data in S3, Redshift, etc.
upvoted 5 times
2 months, 4 weeks ago
Selected Answer: C
This solution meets the requirements for scalability, near-real-time processing, and sharing data with several internal applications. Kinesis Data
Streams is a fully managed service that can handle millions of transactions per second, making it a scalable solution. Using Lambda to process the
data and remove sensitive information provides a fast and efficient method to perform data transformation in near-real-time. Storing the
processed data in DynamoDB allows for low-latency retrieval, and the data can be shared with other applications using the Kinesis data stream.
upvoted 2 times
2 months, 4 weeks ago
C. B is incorrect because Firehose can't work with Lambda.
upvoted 1 times
3 months ago
Selected Answer: C
Kinesis Data Firehose doesn't support DynamoDB as a destination.
https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: C
Kinesis Data Streams focuses on ingesting and storing data streams. Kinesis Data Firehose focuses on delivering data streams to select destinations.
Both can ingest data streams but the deciding factor in which to use depends on where your streamed data should go to.
upvoted 1 times
3 months, 4 weeks ago
Selected Answer: C
I was confused by B because of the phrase "near-real-time", but the destination of Firehose cannot be DynamoDB.
https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html
upvoted 1 times
4 months ago
Answer B. The question says: "The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with
several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database".
So, only the data stored in the database needs to be sanitized, NOT the data stored in S3. Option C is wrong because it says:
"Use AWS Lambda integration to remove sensitive data from every transaction", which is NOT what the question asks for.
upvoted 1 times
4 months ago
Selected Answer: B
My vote is: option B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use
AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
This question has 2 requirements:
1. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal
applications.
2. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
upvoted 2 times
Topic 1
Question #34
A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration
changes on its AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
CloudTrail - Track user activity and API call history.
Config - Assesses, audits, and evaluates the configuration and relationships of your AWS resources.
Therefore, the answer is B
upvoted 23 times
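The split in the answer above maps directly onto two API calls: CloudTrail answers "who called which API, and when?", while Config answers "how did this resource's configuration change over time?". A hedged boto3 sketch, where the instance ID is a hypothetical placeholder and the live calls (which need credentials) are commented out:

```python
def cloudtrail_lookup_params(resource_name, max_results=50):
    """Build the parameters for cloudtrail.lookup_events(), filtering
    the API-call history down to one resource."""
    return {
        "LookupAttributes": [
            {"AttributeKey": "ResourceName", "AttributeValue": resource_name}
        ],
        "MaxResults": max_results,
    }

params = cloudtrail_lookup_params("i-0123456789abcdef0")  # hypothetical ID

# With credentials configured:
# import boto3
# # history of API calls that touched the resource (who/when/what)
# events = boto3.client("cloudtrail").lookup_events(**params)
# # history of configuration changes for the same resource
# history = boto3.client("config").get_resource_config_history(
#     resourceType="AWS::EC2::Instance",
#     resourceId="i-0123456789abcdef0",
# )
```

Neither service substitutes for the other, which is why option B pairs them: Config for configuration-change tracking, CloudTrail for the API-call record.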
Most Recent
6 days, 14 hours ago
Selected Answer: B
config => AWS Config
record API calls => AWS CloudTrail
upvoted 1 times
1 week, 3 days ago
Selected Answer: B
To meet the requirement of tracking configuration changes on AWS resources and recording a history of API calls, a solutions architect should
recommend option B: Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
Option A (using CloudTrail to track configuration changes and Config to record API calls) is incorrect because CloudTrail is specifically designed to
capture API call history, while Config is designed for tracking configuration changes.
Option C (using Config to track configuration changes and CloudWatch to record API calls) is not the recommended approach. While CloudWatch
can be used for monitoring and logging, it does not provide the same level of detail and compliance tracking as CloudTrail for recording API calls.
Option D (using CloudTrail to track configuration changes and CloudWatch to record API calls) is not the optimal choice because CloudTrail is the
appropriate service for tracking configuration changes, while CloudWatch is not specifically designed for recording API call history.
upvoted 2 times
3 weeks, 5 days ago
Selected Answer: B
Option B meets the requirements.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
AWS Config is a fully managed service that allows the company to assess, audit, and evaluate the configurations of its AWS resources. It provides a
detailed inventory of the resources in use and tracks changes to resource configurations. AWS Config can detect configuration changes and alert
the company when changes occur. It also provides a historical view of changes, which is essential for compliance and governance purposes.
AWS CloudTrail is a fully managed service that provides a detailed history of API calls made to the company's AWS resources. It records all API
activity in the AWS account, including who made the API call, when the call was made, and what resources were affected by the call. This
information is critical for security and auditing purposes, as it allows the company to investigate any suspicious activity that might occur on its AWS
resources.
upvoted 3 times
3 months, 3 weeks ago
Selected Answer: B
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It provides a history of
configuration changes made to your resources and can be used to track changes made to your resources over time.
AWS CloudTrail is a service that enables you to record API calls made to your AWS resources. It provides a history of API calls made to your
resources, including the identity of the caller, the time of the call, the source of the call, and the response element returned by the service.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: B
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It provides a history of
configuration changes made to your resources and can be used to track changes made to your resources over time.
AWS CloudTrail is a service that enables you to record API calls made to your AWS resources. It provides a history of API calls made to your
resources, including the identity of the caller, the time of the call, the source of the call, and the response element returned by the service.
upvoted 1 times
4 months, 1 week ago
Selected Answer: B
AWS Config is basically used to track config changes, while cloudtrail is to monitor API calls
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls. This option is the best because it utilizes both AWS
CloudTrail and AWS Config, which are both designed for tracking and recording different types of information related to AWS resources and API
calls. AWS CloudTrail is used to track user activity and API call history, and AWS Config is used to assess, audit, and evaluate the configuration and
relationships of your AWS resources. Together, they provide a comprehensive and robust solution for compliance, governance, auditing, and security.
upvoted 1 times
5 months, 1 week ago
Why not B?
AWS Config is primarily used to assess, audit, and evaluate the configuration and relationships of resources in your AWS environment. It does
not record the history of API calls made to these resources. On the other hand, AWS CloudTrail is used to track user activity and API call history.
Together, AWS Config and CloudTrail provide a complete picture of the configuration and activity on your AWS resources, which is necessary for
compliance, governance, auditing, and security. Therefore, option A is the best choice.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
CloudTrail tracks user activity as well as any API calls (think of bread crumbs leading to a culprit). Config is exactly what it sounds like:
configuration. So think audits, config changes, etc.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
auditing = cloudtrail
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
The correct answer is B: Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It provides a history of
configuration changes made to your resources and can be used to track changes made to your resources over time.
AWS CloudTrail is a service that enables you to record API calls made to your AWS resources. It provides a history of API calls made to your
resources, including the identity of the caller, the time of the call, the source of the call, and the response element returned by the service.
upvoted 2 times
6 months, 1 week ago
Together, AWS Config and AWS CloudTrail can be used to meet the requirements for compliance, governance, auditing, and security by tracking
configuration changes and recording a history of API calls made to your AWS resources.
Amazon CloudWatch is a monitoring service for AWS resources and the applications you run on the cloud. It is not specifically designed for
tracking configuration changes or recording a history of API calls.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It can track configuration changes
to your AWS resources and record a history of these changes. AWS CloudTrail is a service that records API calls made to AWS resources and logs
the API calls in a CloudTrail event.
upvoted 1 times
6 months, 1 week ago
B. ans: https://aws.amazon.com/about-aws/whats-new/2016/07/aws-cloudtrail-now-access-configuration-history-of-resources-referenced-in-your-api-calls/
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Correct Answer is A
Use CloudTrail to track configuration changes and AWS Config to record API calls; AWS Config records the configuration state for the resource
provided in the request. (AWS Config is a service that records the configuration of your AWS resources and maintains a history of changes made to
these resources.) AWS CloudTrail, on the other hand, is a service that records API calls made on your AWS account and delivers the log files to you.
This service can be used to track configuration changes on your AWS resources in real time. Therefore, the correct solution is to use AWS CloudTrail
to track configuration changes and AWS Config to record API calls.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
The answer is B
upvoted 1 times
Topic 1
Question #35
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a
VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a
solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?
A. Enable Amazon GuardDuty on the account.
B. Enable Amazon Inspector on the EC2 instances.
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
Answer is D
C is incorrect because the question says third-party DNS, and Route 53 is AWS proprietary
upvoted 19 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers,
CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
upvoted 19 times
1 month, 2 weeks ago
I'd agree, as Shield Advanced is the only tier that can protect EC2, which is not possible in Standard.
upvoted 2 times
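As a rough illustration of answer D: Shield Advanced protection is attached per resource after the account subscribes. This is a sketch only; the protection name and load balancer ARN below are placeholders, and the Shield Advanced subscription carries a significant monthly fee.

```shell
# One-time, account-level Shield Advanced subscription.
aws shield create-subscription

# Protect a specific load balancer (ARN is a placeholder).
aws shield create-protection \
  --name web-alb-protection \
  --resource-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/50dc6c495c0c9188
```

This per-resource assignment is why the option wording says "assign the ELB to it."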
Most Recent
1 week, 3 days ago
Selected Answer: D
Option A is incorrect because Amazon GuardDuty is a threat detection service that focuses on identifying malicious activity and unauthorized
behavior within AWS accounts. While it is useful for detecting various security threats, it does not specifically address large-scale DDoS attacks.
Option B is also incorrect because Amazon Inspector is a vulnerability assessment service that helps identify security issues and vulnerabilities
within EC2. It does not directly protect against DDoS attacks.
Option C is not the optimal choice because AWS Shield provides basic DDoS protection for resources such as Elastic IP addresses, CloudFront
distributions, and Route 53 hosted zones. However, it does not provide the advanced capabilities and assistance offered by AWS Shield Advanced,
which is better suited for protecting against large-scale DDoS attacks.
Therefore, option D with AWS Shield Advanced and assigning the ELB to it is the recommended solution to detect and protect against large-scale
DDoS attacks in the architecture described.
upvoted 4 times
3 weeks, 5 days ago
Selected Answer: D
I'm voting for option D.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Key words: DDoS -> Shield
upvoted 2 times
3 months, 4 weeks ago
Selected Answer: D
DDoS protection is a feature of AWS Shield, so I was torn between C and D. But detection usually works via health checks, and health checks run at
the target-group level of the ELB. Finally, I would go with D.
upvoted 1 times
6 months ago
Selected Answer: D
Details when to use the service,https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
A third-party service is used for the DNS. = Not Route 53 (AWS). The company's solutions architect must recommend a solution to detect and
protect against large-scale DDoS attacks = Shield
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
The correct answer is D: Enable AWS Shield Advanced and assign the ELB to it.
AWS Shield is a service that provides DDoS protection for your AWS resources. There are two tiers of AWS Shield: AWS Shield Standard and AWS
Shield Advanced. AWS Shield Standard is included with all AWS accounts at no additional cost and provides protection against most common
network and transport layer DDoS attacks. AWS Shield Advanced provides additional protection against more complex and larger scale DDoS
attacks, as well as access to a team of DDoS response experts.
To detect and protect against large-scale DDoS attacks on a public-facing web application hosted on Amazon EC2 instances behind an Elastic Load
Balancer (ELB), you should enable AWS Shield Advanced and assign the ELB to it. This will provide advanced protection against DDoS attacks
targeting the ELB and the EC2 instances behind it.
upvoted 6 times
6 months, 1 week ago
Amazon GuardDuty is a threat detection service that analyzes network traffic and other data sources to identify potential threats to your AWS
resources. It is not specifically designed for detecting and protecting against DDoS attacks.
Amazon Inspector is a security assessment service that analyzes the runtime behavior of your Amazon EC2 instances to identify security
vulnerabilities. It is not specifically designed for detecting and protecting against DDoS attacks.
Amazon Route 53 is a DNS service that routes traffic to your resources on the internet. It is not specifically designed for detecting and
protecting against DDoS attacks.
upvoted 3 times
2 months, 3 weeks ago
hey buddy, quick question: is this SAA question discussion enough to pass the exam?
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/elastic-load-balancing-bp6.html
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/best-practices-for-ddos-mitigation.html
You can use Shield Advanced to configure DDoS protection for Elastic IP addresses. When an Elastic IP address is assigned per Availability Zone to
the Network Load Balancer, Shield Advanced will apply the relevant DDoS protections for the Network Load Balancer traffic.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
D
https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/elastic-load-balancing-bp6.html
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 2 weeks ago
Large-scale DDoS attacks = AWS Shield Advanced
The correct answer is D
https://aws.amazon.com/shield/faqs/
https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/elastic-load-balancing-bp6.html
upvoted 5 times
7 months, 2 weeks ago
Selected Answer: D
Same reasoning as given by Ninjawarz
upvoted 1 times
8 months, 2 weeks ago
The answer is D
upvoted 3 times
Topic 1
Question #36
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company
must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in
both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.
Correct Answer:
C
Highly Voted
8 months, 3 weeks ago
Selected Answer: B
KMS Multi-region keys are required https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 33 times
2 weeks, 3 days ago
The answer is C, because "Server-side encryption with Amazon S3 managed keys (SSE-S3) is the base level of encryption configuration for every
bucket in Amazon S3. If you want to use a different type of default encryption, you can also specify server-side encryption with AWS Key
Management Service (AWS KMS) keys (SSE-KMS) or customer-provided keys (SSE-C)"
By using SSE-KMS, you can encrypt the data stored in the S3 buckets with a customer managed KMS key. This ensures that the data is protected
and allows you to have control over the encryption key. By creating an S3 bucket in each Region and configuring replication between them, you
can have data and key redundancy in both Regions.
upvoted 1 times
7 months, 1 week ago
Amazon S3 cross-Region replication decrypts and re-encrypts data under a KMS key in the destination Region, even when replicating objects
protected by a multi-Region key. So stating that a multi-Region key is required is incorrect.
upvoted 3 times
2 months, 3 weeks ago
Option B involves configuring the application to use client-side encryption, which can increase the operational overhead of managing and
securing the keys.
upvoted 2 times
7 months ago
@magazz: it's not true then. Based on the document from AWS https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-config-for-kms-objects.html ,
we will need to set up the replication rule with a destination KMS key. In order to have the key available in more than one Region, a multi-Region
key should be required. But I'm still not in favor of option B: when we can use server-side encryption, why waste effort on client-side encryption?
upvoted 2 times
7 months ago
I would say it's true... Not sure the previous one say "not true" :D.
upvoted 1 times
6 months, 4 weeks ago
It's not clear what you are saying. Are you saying that B is correct or D is correct?
upvoted 2 times
5 months, 3 weeks ago
:D is a smiley, I thought
upvoted 1 times
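For reference on the multi-Region key approach the top comment links to, creating and replicating such a key looks roughly like this with the CLI. This is a sketch under assumptions: the key ID, Regions, and description are placeholders; the key ID of a real multi-Region key (prefixed `mrk-`) comes from the create-key output, and the replica shares that ID and key material, which is what lets both buckets use "the same key."

```shell
# Create the primary multi-Region customer managed key in us-east-1.
aws kms create-key --multi-region --region us-east-1 \
  --description "Primary multi-Region key for S3 data"

# Replicate it into the second Region; the replica keeps the same mrk- key ID
# and key material, so ciphertext from one Region decrypts in the other.
aws kms replicate-key \
  --key-id mrk-1234abcd12ab34cd56ef1234567890ab \
  --replica-region eu-west-1 --region us-east-1
```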
Highly Voted
8 months, 2 weeks ago
Community vote distribution
B (56%)
D (43%)
Selected Answer: D
Cannot be A - question says customer managed key
Cannot B - client side encryption is operational overhead
Cannot C -as it says SSE-S3 instead of customer managed
so the answer is D though it required one time setup of keys
upvoted 29 times
8 months, 2 weeks ago
The data in both S3 buckets must be encrypted and decrypted with the same KMS key.
AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though you
had the same key in multiple Regions.
"as though" means it's different.
So I agree with B
upvoted 5 times
8 months, 2 weeks ago
keys change across Regions unless you use multi-Region keys
upvoted 2 times
7 months, 1 week ago
How does client side encryption increase OPERATIONAL overhead? Do you think every connected client is sitting there with the gpg cli,
decrypting/encrypting every packet that comes in/out? No, it's done via the SDK -> https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html
The correct answer is B because that's the only way to actually get the same key across multiple regions with minimal operational overhead
upvoted 12 times
2 months, 3 weeks ago
"The data in both S3 buckets must be encrypted and decrypted with the same KMS key"
Client-side encryption means that the key is generated on the client without being stored in KMS...
upvoted 2 times
8 months, 2 weeks ago
Funny, if you don't do encryption on the client side, where else could it be?
upvoted 1 times
7 months, 3 weeks ago
It could be server side. For client side, the application needs to perform the encryption and decryption itself, so S3 object encryption on the
server side is less operational overhead. https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html
But for option B, the major issue is that if you create KMS keys in 2 regions, they cannot be the same.
upvoted 2 times
7 months, 3 weeks ago
Sorry for the typo, I mean option D.
upvoted 2 times
Most Recent
9 hours, 55 minutes ago
Selected Answer: D
least OPERATIONAL overhead, not configuration overhead.
B: client side encryption
D: server side encryption
Therefore D should be correct
upvoted 1 times
5 days, 8 hours ago
Selected Answer: B
AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though you had
the same key in multiple Regions. Each set of related multi-Region keys has the same key material and key ID, so you can encrypt data in one AWS
Region and decrypt it in a different AWS Region without re-encrypting or making a cross-Region call to AWS KMS.
B is correct
upvoted 1 times
1 week, 3 days ago
Selected Answer: D
Option A is not suitable because it does not utilize the AWS KMS customer managed key for encryption. SSE-S3 uses Amazon S3 managed
encryption keys, which are not aligned with the requirement of using a customer managed key.
Option B adds unnecessary complexity and overhead. Client-side encryption requires the application to handle the encryption and decryption
processes, which can increase the application's complexity and maintenance.
Option C does not provide consistency in encryption and decryption between the two S3 buckets. The requirement states that the data and the key
must be stored in each Region, which can be achieved more efficiently by using SSE-KMS with a single customer managed KMS key.
Therefore, option D is the recommended solution as it meets the requirements with the least operational overhead by using a customer managed
KMS key, SSE-KMS encryption, and S3 bucket replication between the two Regions.
upvoted 2 times
2 weeks, 2 days ago
Selected Answer: D
C is wrong.
D is right.
Just because it says "AWS Key Management Service (AWS KMS) customer managed key" does not mean it is client-side encryption (CSE).
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
If you want to use a customer managed key for SSE-KMS, create a symmetric encryption customer managed key before you configure SSE-KMS.
Then, when you configure SSE-KMS for your bucket, specify the existing customer managed key.
upvoted 1 times
2 weeks, 2 days ago
I wrote "C is wrong" in my main comment, but I meant B is wrong.
upvoted 1 times
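The SSE-KMS step this comment describes (point the bucket's default encryption at an existing customer managed key) can be sketched as follows. The bucket name and key ARN are placeholders; the same command would be run against the bucket in each Region.

```shell
# Set the bucket's default encryption to SSE-KMS with a customer managed key.
aws s3api put-bucket-encryption \
  --bucket my-data-bucket-us-east-1 \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/mrk-1234abcd12ab34cd56ef1234567890ab"
      }
    }]
  }'
```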
2 weeks, 3 days ago
Selected Answer: C
As it says in the AWS KMS "Server-side encryption with Amazon S3 managed keys (SSE-S3) is the base level of encryption configuration for every
bucket in Amazon S3. If you want to use a different type of default encryption, you can also specify server-side encryption with AWS Key
Management Service (AWS KMS) keys (SSE-KMS) or customer-provided keys (SSE-C)".
That means you can use SSE-S3.
upvoted 1 times
3 weeks, 3 days ago
Selected Answer: D
D exactly
upvoted 1 times
3 weeks, 4 days ago
The question says that you want to use the same KMS key for both S3 buckets, so it can only be B. C indicates creating a key in each Region, which
does not fulfill what the question says.
upvoted 1 times
3 weeks, 5 days ago
Selected Answer: B
The correct answer should be option B.
Here is a link: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html
upvoted 1 times
4 weeks ago
Selected Answer: B
D is not correct simply because it's just not using customer managed keys. Many people prioritize the "operational overhead" over what the
scenario is actually asking.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
multi-region is required. all other options are eliminated because of this.
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
Option D, because for option C, when I followed the steps I didn't see an option to enable client-side encryption when creating the bucket.
upvoted 1 times
1 month, 1 week ago
A, C - Wrong - Because they use SSE-S3 instead of a customer managed key.
B - Wrong - Client-side encryption is operational overhead.
D - Correct, because it uses customer managed keys with SSE-KMS. Also, you need to explicitly enable the replication for SSE-KMS option.
upvoted 1 times
1 month, 1 week ago
EXAMPLE:
Primary key: arn:aws:kms:us-east-1:111122223333:key/mrk-1234abcd12ab34cd56ef12345678990ab
Replica key: arn:aws:kms:eu-west-1:111122223333:key/mrk-1234abcd12ab34cd56ef12345678990ab
upvoted 1 times
1 month, 2 weeks ago
d:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
Option B is the correct solution.
This is ChatGPT's answer: "In this scenario, a customer-managed multi-Region KMS key should be created, which allows the company to
encrypt and decrypt data in both S3 buckets with the same key. By using a customer-managed key, the company can have greater control over key
management and security.
Creating an S3 bucket in each Region and configuring replication between them is also necessary to ensure that the data is accessible in both
Regions.
Finally, configuring the application to use the KMS key with client-side encryption provides end-to-end encryption and helps ensure that the data
is protected from unauthorized access."
upvoted 2 times
Topic 1
Question #37
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy
to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS
services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a
remote SSH session.
C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a
tunnel for administration of each instance.
D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the
instances by using SSH keys across the VPN tunnel.
Correct Answer:
B
Highly Voted
8 months ago
Selected Answer: B
How can Session Manager benefit my organization?
Ans: No open inbound ports and no need to manage bastion hosts or SSH keys
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 14 times
7 months, 2 weeks ago
Can you tell from the question whether it is Windows or Linux EC2? I think not, so how do you want to do an SSH session for Windows?
Answer is C
upvoted 1 times
5 days, 7 hours ago
"Cross-platform support for Windows, Linux, and macOS"
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 1 times
6 months, 4 weeks ago
Session Manager provides support for Windows, Linux, and macOS from a single tool
upvoted 4 times
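Once the instances are registered with Systems Manager, the access workflow from answer B is a couple of CLI calls. This is an illustrative sketch: the instance ID is a placeholder, and it assumes the Session Manager plugin for the AWS CLI is installed locally.

```shell
# Verify the instance is registered with Systems Manager (placeholder ID).
aws ssm describe-instance-information \
  --filters Key=InstanceIds,Values=i-0abcd1234efgh5678

# Start an interactive shell session; no open inbound ports,
# bastion hosts, or SSH keys are needed.
aws ssm start-session --target i-0abcd1234efgh5678
```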
Most Recent
5 days, 7 hours ago
Selected Answer: B
+Centralized access control to managed nodes using IAM policies
+No open inbound ports and no need to manage bastion hosts or SSH keys
+Cross-platform support for Windows, Linux, and macOS
upvoted 1 times
1 week, 3 days ago
Selected Answer: B
Option A provides direct access to the terminal interface of each instance, but it may not be practical for administration purposes and can be
cumbersome to manage, especially for multiple instances.
Option C adds operational overhead and introduces additional infrastructure that needs to be managed, monitored, and secured. It also requires
SSH key management and maintenance.
Option D is complex and may not be necessary for remote administration. It also requires administrators to connect from their local on-premises
machines, which adds complexity and potential security risks.
Therefore, option B is the recommended solution as it provides secure, auditable, and repeatable remote access using IAM roles and AWS Systems
Manager Session Manager, with minimal operational overhead.
upvoted 2 times
3 weeks, 5 days ago
Selected Answer: B
The choice for me is option B.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
B is correct and has the least overhead.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
AWS Systems Manager Session Manager is a fully managed service that provides secure and auditable instance management without the need for
bastion hosts, VPNs, or SSH keys. It provides secure and auditable access to EC2 instances and eliminates the need for managing and securing SSH
keys.
upvoted 1 times
3 months ago
Selected Answer: B
I selected B) as "open inbound ports, maintain bastion hosts, or manage SSH keys" https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
However, Session Manager comes with a pretty robust list of prerequisites to put in place (SSM Agent and connectivity to SSM endpoints). On the
other side, A) comes with basically no prerequisites, but it is only for Linux, and we do not have info about the OSs, so we should assume Windows
as well.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: B
The keyword that makes option B follows the AWS Well-Architected Framework is "IAM role." IAM roles provide fine-grained access control and are
a recommended best practice in the AWS Well-Architected Framework. By attaching the appropriate IAM role to each instance and using AWS
Systems Manager Session Manager to establish a remote SSH session, the solution is using IAM roles to control access and follows a
recommended best practice.
upvoted 2 times
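The "attach the appropriate IAM role" step from option B might look like this with the CLI. A sketch under assumptions: the role, profile, and instance names are made up; AmazonSSMManagedInstanceCore is the AWS managed policy Session Manager requires on the instance role.

```shell
# Create a role trusted by EC2 and attach the SSM managed policy.
aws iam create-role --role-name SSMInstanceRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'
aws iam attach-role-policy --role-name SSMInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Wrap the role in an instance profile and attach it to a running instance.
aws iam create-instance-profile --instance-profile-name SSMInstanceProfile
aws iam add-role-to-instance-profile \
  --instance-profile-name SSMInstanceProfile --role-name SSMInstanceRole
aws ec2 associate-iam-instance-profile --instance-id i-0abcd1234efgh5678 \
  --iam-instance-profile Name=SSMInstanceProfile
```

The same role/profile can be baked into a launch template so new instances get it automatically, which is what makes the process repeatable.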
4 months, 3 weeks ago
Answer is B ~ Chat GPT
To meet the requirements with the least operational overhead, the company can use the AWS Systems Manager Session Manager. It is a native
AWS service that enables secure and auditable access to instances without the need for remote public IP addresses, inbound security group rules,
or Bastion hosts. With AWS Systems Manager Session Manager, the company can establish a secure and auditable session to the EC2 instances and
perform administrative tasks without the need for additional operational overhead.
upvoted 1 times
4 months, 3 weeks ago
Answer is B ~ (Chat GPT)
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to
access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS
services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
correct answer is B
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
Option B. Attaching the appropriate IAM role to each existing instance and new instance and using AWS Systems Manager Session Manager to
establish a remote SSH session would meet the requirements with the least operational overhead. This approach allows for secure remote access to
the instances without the need to manage additional infrastructure or maintain a separate connection to the instances. It also allows for the use of
native AWS services and follows the AWS Well-Architected Framework.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
https://dev.to/aws-builders/aws-systems-manager-session-manager-implementation-f9a#:~:text=Session%20Manager%20is%20a%20fully%20managed%20AWS%20Systems,ports%2C%20maintain%20bastion%20hosts%2C%20or%20manage%20SSH%20keys.
upvoted 1 times
6 months ago
EC2 = IAM role
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
administer the instances remotely and securely:
EC2 serial console (option A) not intended for regular administration.
option B allows administrators to remotely access and administer the instances securely without the need for additional infrastructure or
maintenance.
option C requires additional infrastructure and maintenance
option D can be a complex and time-consuming process.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
The correct answer is B: Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager
to establish a remote SSH session.
To remotely and securely access and administer the Amazon EC2 instances in the company's AWS account, you should attach the appropriate IAM
role to each existing instance and new instance. This will allow the instances to access the required AWS services and resources. Then, you can use
AWS Systems Manager Session Manager to establish a remote SSH session to each instance.
upvoted 1 times
6 months, 1 week ago
AWS Systems Manager Session Manager is a native AWS service that allows you to remotely and securely access the command line interface of
your Amazon EC2 instances, on-premises servers, and virtual machines (VMs) running in other clouds, without the need to open inbound ports,
maintain bastion hosts, or manage SSH keys. With Session Manager, you can establish a secure, auditable connection to your instances using
the AWS Management Console, the AWS CLI, or the AWS SDKs.
Using the EC2 serial console to directly access the terminal interface of each instance for administration would not be a repeatable process and
would not follow the AWS Well-Architected Framework.
upvoted 2 times
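The comments above describe Session Manager's main prerequisite: an instance role with the SSM permissions attached. A minimal sketch of the two IAM pieces involved (the role name is a placeholder; the managed policy ARN is the one AWS publishes for the SSM agent):

```python
import json

# Trust policy that lets EC2 instances assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# AWS-managed policy that grants the SSM agent what Session Manager needs.
SSM_CORE_POLICY_ARN = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"

# With boto3 this would be roughly:
#   iam.create_role(RoleName="ssm-instance-role",   # placeholder name
#                   AssumeRolePolicyDocument=json.dumps(trust_policy))
#   iam.attach_role_policy(RoleName="ssm-instance-role",
#                          PolicyArn=SSM_CORE_POLICY_ARN)
print(json.dumps(trust_policy["Statement"][0]["Principal"]))
```

No inbound ports, bastion hosts, or SSH keys are involved; the instance's SSM agent connects outbound to the Systems Manager endpoints.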
6 months, 1 week ago
Creating an administrative SSH key pair and loading the public key into each EC2 instance would require you to manage and rotate the keys,
which would increase the operational overhead. Additionally, deploying a bastion host in a public subnet to provide a tunnel for
administration of each instance would also increase the operational overhead and potentially introduce security risks.
Establishing an AWS Site-to-Site VPN connection and instructing administrators to use their local on-premises machines to connect directly
to the instances using SSH keys across the VPN tunnel would also increase the operational overhead and potentially introduce security risks.
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B - AWS best practice for remote SSH access to EC2
upvoted 1 times
Topic 1
Question #38
A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from
around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?
A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point
to the IP addresses of the accelerators.
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Correct Answer:
C
Highly Voted
1 week, 3 days ago
Selected Answer: C
Option A (replicating the S3 bucket to all AWS Regions) can be costly and complex, requiring replication of data across multiple Regions and
managing synchronization. It may not provide a significant latency improvement compared to the CloudFront solution.
Option B (provisioning accelerators in AWS Global Accelerator) can be more expensive as it adds an extra layer of infrastructure (accelerators) and
requires associating IP addresses with the S3 bucket. CloudFront already includes global edge locations and provides similar acceleration
capabilities.
Option D (enabling S3 Transfer Acceleration) can help improve upload speed to the S3 bucket but may not have a significant impact on reducing
latency for website visitors.
Therefore, option C is the most cost-effective solution as it leverages CloudFront's caching and global distribution capabilities to decrease latency
and improve website performance.
upvoted 7 times
Most Recent
5 days, 7 hours ago
Selected Answer: C
key words:
-around the world
-decrease latency
-most cost-effective
answer is C
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
C is the most cost effective.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world, providing low latency and high
transfer speeds to users accessing the content. Adding a CloudFront distribution in front of the S3 bucket will cache the static website's content at
edge locations around the world, decreasing latency for users accessing the website.
This solution is also cost-effective as it only charges for the data transfer and requests made by users accessing the content from the CloudFront
edge locations. Additionally, this solution provides scalability and reliability benefits as CloudFront can automatically scale to handle increased
demand and provide high availability for the website.
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
Cloud front
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML, CSS,
JavaScript, and images. It does this by placing cache servers in locations around the world, which store copies of the content and serve it to users
from the location that is nearest to them.
upvoted 1 times
4 months ago
My vote is: option B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use
AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
This question has 2 requirements:
1. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal
applications.
2. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: C
C. S3 accelerator is best for uploads to S3, whereas Cloudfront is for content delivery. S3 static website can be the origin which is distributed to
Cloudfront and routed by Route 53.
upvoted 3 times
4 months, 3 weeks ago
Selected Answer: C
Option C.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Option C. Adding an Amazon CloudFront distribution in front of the S3 bucket and editing the Route 53 entries to point to the CloudFront
distribution would meet the requirements most cost-effectively. CloudFront is a content delivery network (CDN) that speeds up the delivery of
static and dynamic web content by distributing it across a global network of edge locations. When a user accesses the website, CloudFront will
automatically route the request to the edge location that provides the lowest latency, reducing the time it takes for the content to be delivered to
the user. This solution also allows for easy integration with S3 and Route 53, and provides additional benefits such as DDoS protection and support
for custom SSL certificates.
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
decrease latency and most cost-effective = CloudFront in front of the S3 bucket (content can be served closer to the user, reducing latency). Replicating
the S3 bucket and Global Accelerator would also decrease latency but would be less cost-effective. Transfer Acceleration wouldn't decrease latency here since
it's not for delivering content, but for transferring it
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
The correct answer is C: Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront
distribution.
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML, CSS,
JavaScript, and images. It does this by placing cache servers in locations around the world, which store copies of the content and serve it to users
from the location that is nearest to them.
To decrease latency for users who access the static website hosted on Amazon S3, you can add an Amazon CloudFront distribution in front of the
S3 bucket and edit the Route 53 entries to point to the CloudFront distribution. This will allow CloudFront to cache the content of the website at
locations around the world, which will reduce the time it takes for users to access the website by serving it from the location that is nearest to
them.
upvoted 3 times
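As the explanation above notes, the only DNS change needed is an alias record pointing at the distribution. A hedged sketch of that Route 53 change batch (the domain and distribution endpoint are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for all CloudFront alias targets):

```python
# Route 53 fixed hosted zone ID for CloudFront alias records.
CLOUDFRONT_HOSTED_ZONE_ID = "Z2FDTNDATAQYW2"

# Hypothetical change batch; the domain name and distribution
# endpoint below are placeholders.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": CLOUDFRONT_HOSTED_ZONE_ID,
                    "DNSName": "d111111abcdef8.cloudfront.net.",
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ]
}
# With boto3 this would be roughly:
#   route53.change_resource_record_sets(
#       HostedZoneId="ZEXAMPLE",           # placeholder zone ID
#       ChangeBatch=change_batch)
print(change_batch["Changes"][0]["ResourceRecordSet"]["Type"])
```

An alias record to CloudFront incurs no Route 53 query charges, which fits the cost-effectiveness requirement.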
6 months, 1 week ago
Answer A, (WRONG) - Replicating the S3 bucket that contains the website to all AWS Regions and adding Route 53 geolocation routing entries
would be more expensive than using CloudFront, as it would require you to pay for the additional storage and data transfer costs associated
with replicating the bucket to multiple Regions.
Answer B, (WRONG) - Provisioning accelerators in AWS Global Accelerator and associating the supplied IP addresses with the S3 bucket would
also be more expensive than using CloudFront, as it would require you to pay for the additional cost of the accelerators.
Answer D, (WRONG) - Enabling S3 Transfer Acceleration on the bucket and editing the Route 53 entries to point to the new endpoint would not
reduce latency for users who access the website from around the world, as it only speeds up the transfer of large files over the public internet
and does not have cache servers in multiple locations around the world.
upvoted 5 times
6 months, 1 week ago
Selected Answer: C
Option C - Cloudfront is the right answer.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
CloudFront
upvoted 1 times
6 months, 4 weeks ago
Isn't Transfer Acceleration the same thing? I mean, what's the difference between C and D?
upvoted 1 times
6 months, 4 weeks ago
ok, I got the answer to this:
In short, Transfer Acceleration is for Writes and CloudFront is for Reads.
upvoted 8 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
ok CloudFront
upvoted 1 times
Topic 1
Question #39
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that
contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every
day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage
performance is the problem.
Which solution addresses this performance issue?
A. Change the storage type to Provisioned IOPS SSD.
B. Change the DB instance to a memory optimized instance class.
C. Change the DB instance to a burstable performance instance class.
D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
Correct Answer:
B
Highly Voted
6 months, 1 week ago
Selected Answer: A
A: Made for high levels of I/O ops for consistent, predictable performance.
B: Can improve performance of insert ops, but this is a storage performance problem rather than a processing power problem
C: for moderate CPU usage
D: for scaling read-only replicas; doesn't improve performance of insert ops on the primary DB instance
upvoted 13 times
Highly Voted
1 week, 3 days ago
Selected Answer: A
Option B (changing the DB instance to a memory optimized instance class) focuses on improving memory capacity but may not directly address
the storage performance issue.
Option C (changing the DB instance to a burstable performance instance class) is suitable for workloads with varying usage patterns and burstable
performance needs, but it may not provide consistent and predictable performance for heavy write workloads.
Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) is a solution for high availability and read scaling but
does not directly address the storage performance issue.
Therefore, option A is the most appropriate solution to address the performance issue by leveraging Provisioned IOPS SSD storage type, which
provides consistent and predictable I/O performance for the Amazon RDS for MySQL database.
upvoted 5 times
Most Recent
2 days, 17 hours ago
Selected Answer: A
need I/O
upvoted 1 times
1 week ago
A makes sense
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
A is correct answer
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
General Purpose SSD is not optimal for a database that requires high performance. Answer A is correct
upvoted 1 times
1 month, 1 week ago
A
Option B (changing the DB instance to a memory optimized instance class) focuses on increasing the available memory for the database, but it may
not directly address the storage performance issue.
Option C (changing the DB instance to a burstable performance instance class) is not the optimal choice since burstable performance instances are
designed for workloads with bursty traffic patterns, and they may not provide the sustained performance needed for heavy update operations.
Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) is primarily used for high availability and read scaling
rather than addressing storage performance issues.
upvoted 2 times
1 month, 1 week ago
They're using General Purpose SSD?? Provisioned IOPS SSD will fix the performance issue described.
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
How is the answer B? A is blatantly the correct answer... Provisioned IOPS SSD is an obviously faster choice.
upvoted 2 times
2 months, 1 week ago
Question said "The company has determined that the database storage performance is the problem."
So answer is A.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
General purpose SSD is not optimal for database that requires high performance. Answer is A
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
changed my mind from B to A because of this statement: 'There are millions of updates against this data every day'.
upvoted 2 times
2 months, 4 weeks ago
Selected Answer: A
Provisioned IOPS SSD storage provides a guaranteed level of input/output operations per second (IOPS) that can help improve the performance of
write-intensive database workloads. This solution can be cost-effective since you only pay for the amount of storage and IOPS provisioned. The
performance of the storage will be stable, and it will provide predictable results.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: A
Provisioned IOPS SSD (io1) is a high-performance storage option that is designed for I/O-intensive workloads, such as databases that require a
high number of read and write operations per second. It allows you to provide a specific number of input/output operations per second (IOPS) for
your Amazon RDS for MySQL database instance, which can improve the performance of insert operations that require high levels of I/O.
upvoted 1 times
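The fix described above maps to a single RDS modification. A sketch of the parameters involved (the instance identifier and IOPS figure are placeholders; `io1` is the Provisioned IOPS storage type):

```python
# Hypothetical parameters for moving the instance from General Purpose (gp2)
# storage to Provisioned IOPS (io1). The identifier and IOPS value are
# placeholders, not from the question.
modify_params = {
    "DBInstanceIdentifier": "items-db",   # placeholder instance name
    "StorageType": "io1",                 # switch from gp2 to Provisioned IOPS
    "Iops": 20000,                        # provisioned IOPS target (placeholder)
    "AllocatedStorage": 2048,             # keep the existing 2 TB of storage
    "ApplyImmediately": True,
}
# With boto3 this would be roughly:
#   rds.modify_db_instance(**modify_params)
print(modify_params["StorageType"])
```

The instance class stays the same; only the storage layer changes, which is exactly what the question's "database storage performance is the problem" hint points at.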
4 months, 2 weeks ago
Selected Answer: A
Change the storage type to Provisioned IOPS SSD would likely address the performance issue described.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: A
https://aws.amazon.com/ebs/features/
"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance
EBS volumes designed for your critical, I/O intensive database applications.
These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require
extremely low latency."
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
5 months, 1 week ago
General Purpose SSD does not perform well for this MySQL workload,
but Provisioned IOPS SSD is a better fit for it
upvoted 2 times
Topic 1
Question #40
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A
solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional
infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a
script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to
Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the
alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon
Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14
days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is
14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Definitely A, it's the most operationally efficient compared to D, which requires a lot of code and infrastructure to maintain. A is mostly managed
(firehose is fully managed and S3 lifecycles are also managed)
upvoted 27 times
6 months, 3 weeks ago
what about the 30 days minimum requirement to transition to S3 glacier?
upvoted 7 times
1 month, 1 week ago
THERE IS NO 30-DAY MINIMUM FOR GLACIER; THAT CONSTRAINT APPLIES TO STANDARD-IA AND ONE ZONE-IA
upvoted 1 times
2 months, 2 weeks ago
This constraint is related to moving from Standard to IA/IA-One Zone only. Nothing to do with Glacier
upvoted 1 times
6 months, 2 weeks ago
You can directly migrate from S3 standard to glacier without waiting
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 3 times
1 month ago
the linked article doesn't support that option; the minimum is 30 days
upvoted 1 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Only A makes sense operationally.
If you think D, just consider what is needed to move the messages from SQS to S3... you are polling 14 TB daily to take out 1 TB... that's not
operationally efficient at all.
upvoted 11 times
Most Recent
1 week, 3 days ago
Selected Answer: A
B suggests launching EC2 instances to ingest and store the alerts, which introduces additional infrastructure management overhead and may not
be as cost-effective and scalable as using managed services like Kinesis Data Firehose and S3.
C involves delivering the alerts to an Amazon OpenSearch Service cluster and manually managing snapshots and data deletion. This introduces
additional complexity and manual overhead compared to the simpler solution of using Kinesis Data Firehose and S3.
D suggests using SQS to ingest the alerts, but it does not provide the same level of data persistence and durability as storing the alerts directly in
S3. Additionally, it requires manual processing and copying of messages to S3, which adds operational complexity.
Therefore, A provides the most operationally efficient solution that meets the company's requirements by leveraging Kinesis Data Firehose to
ingest the alerts, storing them in an S3 bucket, and using an S3 Lifecycle configuration to transition data to S3 Glacier for long-term archival, all
without the need for managing additional infrastructure.
upvoted 3 times
1 month, 1 week ago
Focus on keywords: Amazon Kinesis Data Firehose delivery stream to ingest the alerts. S3 Lifecycle configuration to transition data to Amazon S3
Glacier after 14 days.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: D
D is the correct answer. Check the link below
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
Amazon Kinesis Data Firehose is a fully managed service that can capture, transform, and deliver streaming data into storage systems or analytics
tools, making it an ideal solution for ingesting and storing status alerts. In this solution, the Kinesis Data Firehose delivery stream ingests the alerts
and delivers them to an S3 bucket, which is a cost-effective storage solution. An S3 Lifecycle configuration is set up to transition the data to
Amazon S3 Glacier after 14 days to minimize storage costs.
upvoted 2 times
3 months, 3 weeks ago
Selected Answer: A
The correct answer is A: Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to
deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
upvoted 1 times
4 months, 2 weeks ago
This question was tricky but after some reading my choice went from D to A. Which is Operationally efficient.
upvoted 1 times
5 months, 2 weeks ago
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to
an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
This solution meets the company's requirements to minimize costs and not manage additional infrastructure while providing high availability.
Kinesis Data Firehose is a fully managed service that can automatically ingest streaming data and load it into Amazon S3, Amazon Redshift, or
Amazon Elasticsearch Service. By configuring the Firehose to deliver the alerts to an S3 bucket, the company can take advantage of S3's high
durability and availability. An S3 Lifecycle configuration can be set up to automatically transition data that is older than 14 days to Amazon S3
Glacier, an extremely low-cost storage class for infrequently accessed data.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: A
Creating an Amazon Kinesis Data Firehose delivery stream to ingest the alerts and configuring it to deliver the alerts to an Amazon S3 bucket is the
most operationally efficient solution that meets the requirements. Kinesis Data Firehose is a fully managed service for delivering real-time
streaming data to destinations such as S3, Redshift, Elasticsearch Service, and Splunk. It can automatically scale to handle the volume and
throughput of the alerts, and it can also batch, compress, and encrypt the data as it is delivered to S3. By configuring a Lifecycle policy on the S3
bucket, the company can automatically transition data to Amazon S3 Glacier after 14 days, allowing the company to store the data for longer
periods of time at a lower cost. This solution requires minimal management and provides high availability, making it the most operationally
efficient choice.
upvoted 2 times
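The 14-day archive step in the answer above is just a lifecycle rule on the destination bucket. A minimal sketch of that rule, assuming alerts land under an `alerts/` prefix (the prefix and rule ID are placeholders):

```python
# Hypothetical S3 lifecycle configuration: move objects under alerts/ to
# Glacier 14 days after creation. Prefix and rule ID are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-alerts-after-14-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "alerts/"},
            "Transitions": [
                {"Days": 14, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
# With boto3 this would be roughly:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="alerts-bucket",                  # placeholder bucket name
#       LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["Transitions"][0]["Days"])
```

Once the rule is in place there is nothing to operate: S3 evaluates it daily and transitions the objects itself.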
6 months ago
Selected Answer: D
A is not the right answer because Kinesis Firehose is not the right service to ingest small 2 KB events. The minimum message size for Kinesis Firehose is 5 MB.
Kinesis Data Streams is the right service for this, but as that is not given as an option here, SQS with 14-day retention is the right answer.
upvoted 2 times
6 months ago
"A record can be as large as 1,000 KB." and the diagrams shown in this URL support A as the answer.
https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
upvoted 1 times
6 months ago
Option A:
Thinking about this more, with low operational overhead as the primary requirement, option A is the better option, but it will have higher
latency compared to using Kinesis Data Streams.
upvoted 1 times
6 months ago
any data older than 14 days => cannot be D! => A correct.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
A, MOST operationally efficient solution = Kinesis Data Firehose, since it's a fully managed solution
B, more costly and more operational overhead compared to Kinesis Data Firehose
C, not the most cost-effective solution since the data is not actively queried or analyzed after 14 days
D, designed for messaging rather than storage
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
The correct answer is A: Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to
deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
Amazon Kinesis Data Firehose is a fully managed service that makes it easy to load streaming data into data stores and analytics tools. It can
continuously capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling
real-time analytics with existing business intelligence tools and dashboards you're already using.
upvoted 1 times
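The Firehose piece described above is a single delivery-stream definition. A hedged sketch of the S3-destination configuration (the stream name, role ARN, and bucket ARN are placeholders; `BufferingHints` controls how Firehose batches the 2 KB records before writing to S3):

```python
# Hypothetical Kinesis Data Firehose delivery stream with an S3 destination.
# All names and ARNs below are placeholders.
delivery_stream = {
    "DeliveryStreamName": "edge-alerts",
    "DeliveryStreamType": "DirectPut",
    "ExtendedS3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
        "BucketARN": "arn:aws:s3:::alerts-bucket",
        "Prefix": "alerts/",
        # Batch many small records into fewer, larger S3 objects.
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",
    },
}
# With boto3 this would be roughly:
#   firehose.create_delivery_stream(**delivery_stream)
print(delivery_stream["DeliveryStreamType"])
```

The buffering means the 1 TB/day of 2 KB alerts lands in S3 as a manageable number of compressed objects rather than billions of tiny ones.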
6 months, 1 week ago
To meet the requirements of the company, you can create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts generated by
the edge devices. You can then configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. This will provide a
highly available solution that does not require the company to manage additional infrastructure.
To keep 14 days of data available for immediate analysis and archive any data older than 14 days, you can set up an S3 Lifecycle configuration
to transition data to Amazon S3 Glacier after 14 days. This will allow the company to store the data for long-term retention at a lower cost than
storing it in S3.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
A of course
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
D as B is client-side encryption
upvoted 2 times
Topic 1
Question #41
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2
instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the
data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to
improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple
Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send
events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the
rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure
an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service
(Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS)
topic when the upload to the S3 bucket is complete.
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
This question just screams AppFlow (SaaS integration)
https://aws.amazon.com/appflow/
upvoted 15 times
8 months, 1 week ago
configuring Auto-Scaling also takes time when compared to AppFlow,
in AWS's words "in just a few clicks"
> Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS)
applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks
upvoted 9 times
Highly Voted
2 months, 4 weeks ago
Selected Answer: A
It says "LEAST operational overhead" (i.e., do it in the way that's the least work for me).
If you know a little Amazon AppFlow (watch some videos) you'll see you need time to configure and test it, and in the end you have to cope with errors
during extraction and loading of the info to the target.
The customer in the example ALREADY has some EC2 instances that do the work; the only problem is the performance, which WILL be improved by scaling out
and adding a notification topic (SNS) to decouple the work of notifying the user.
The operational load of doing this is LESS than configuring AppFlow.
upvoted 6 times
Most Recent
1 week, 3 days ago
Selected Answer: B
Option A suggests using an Auto Scaling group to scale out EC2 instances, but it does not address the potential bottleneck of slow application
performance and the notification process.
Option C involves using Amazon EventBridge (CloudWatch Events) rules for data output and S3 uploads, but it introduces additional complexity
with separate rules and does not specifically address the slow application performance.
Option D suggests containerizing the application and using Amazon Elastic Container Service (Amazon ECS) with CloudWatch Container Insights,
which may involve more operational overhead and setup compared to the simpler solution provided by Amazon AppFlow.
Therefore, option B offers the most streamlined solution with the least operational overhead by utilizing Amazon AppFlow for data transfer,
configuring S3 event notifications for upload completion, and leveraging Amazon SNS for notifications without requiring additional infrastructure
management.
upvoted 3 times
1 month, 1 week ago
So true, this question just screams AppFlow (SaaS integration)
upvoted 1 times
1 month, 4 weeks ago
With Amazon AppFlow you can automate bi-directional data flows between SaaS applications and AWS services in just a few clicks. So B is the right answer
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Amazon AppFlow is a fully-managed integration service that enables you to securely exchange data between software as a service (SaaS)
applications, such as Salesforce, and AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift.
The use of Appflow helps to remove the ec2 as the middle layer which slows down the process of data transmission and introduce an additional
variable.
Appflow is also a fully managed AWS service, thus reducing the operational overhead.
https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: B
Keywords:
SaaS --> AppFlow
Operational overhead (B) vs configuration overhead (A)
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
AppFlow is for SaaS integrations:
https://aws.amazon.com/appflow/
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
Amazon AppFlow is a fully managed integration service that can help transfer data between SaaS applications and S3 buckets, making it an ideal
solution for data collection from multiple sources. By using Amazon AppFlow, the company can remove the burden of creating and maintaining
custom integrations, allowing them to focus on the core of their application. Additionally, by using S3 event notifications to trigger an Amazon SNS
topic, the company can improve notification delivery times by removing the dependency on the EC2 instances.
upvoted 2 times
5 months, 1 week ago
Selected Answer: A
This solution allows the EC2 instances to scale out as needed to handle the data processing and uploading, which will improve performance.
Additionally, by configuring an S3 event notification to send a notification to an SNS topic when the upload is complete, the company can still
receive the necessary notifications, but it eliminates the need for the same EC2 instance that is processing and uploading the data to also send the
notifications, which further improves performance. This solution has less operational overhead as it only requires configuring S3 event notifications,
SNS topic and AutoScaling group.
upvoted 4 times
5 months, 3 weeks ago
Selected Answer: B
Amazon AppFlow is a fully managed integration service that enables the secure and easy transfer of data between popular software-as-a-service
(SaaS) applications and AWS services. By using AppFlow, the company can easily set up integrations between SaaS sources and the S3 bucket, and
the service will automatically handle the data transfer and transformation. The S3 event notification can then be used to send a notification to the
user when the upload is complete, without the need to manage additional infrastructure or code. This solution would provide the required
performance improvement and require minimal management, making it the most operationally efficient choice.
upvoted 4 times
5 months, 3 weeks ago
Selected Answer: B
AppFlow only
upvoted 1 times
6 months ago
Selected Answer: B
To meet the requirements with the least operational overhead, the company could consider the following solution:
Option B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to
send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Amazon AppFlow is a fully managed service that enables you to easily and securely transfer data between your SaaS applications and Amazon S3.
By creating an AppFlow flow to transfer the data between the SaaS sources and the S3 bucket, the company can improve the performance of the
application by offloading the data transfer process to a managed service.
upvoted 4 times
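The S3-to-SNS notification wiring discussed in this thread can be sketched concretely. This is a minimal illustration, not part of the question; the topic ARN is hypothetical, and the dict matches the shape that boto3's `put_bucket_notification_configuration` expects for its `NotificationConfiguration` argument.

```python
import json

def build_s3_sns_notification(topic_arn: str) -> dict:
    """S3 bucket notification configuration: publish a message to an
    SNS topic whenever an object upload to the bucket completes."""
    return {
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                # Any ObjectCreated event (Put, Post, Copy, multipart complete)
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }

# Hypothetical topic ARN, for illustration only.
config = build_s3_sns_notification("arn:aws:sns:us-east-1:123456789012:upload-complete")
print(json.dumps(config, indent=2))
```

Note that the SNS topic's access policy must also allow S3 to publish to it; that part is omitted here.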
6 months ago
***INCORRECT ANSWERS***
Option A is incorrect because creating an Auto Scaling group and configuring an S3 event notification does not address the root cause of the
slow application performance, which is related to the data transfer process.
Option C is incorrect because creating multiple EventBridge (CloudWatch Events) rules and configuring them to send events to an SNS topic is
more complex and involves additional operational overhead.
Option D is incorrect because creating a Docker container and hosting it on ECS does not address the root cause of the slow application
performance, which is related to the data transfer process.
upvoted 6 times
6 months, 1 week ago
Selected Answer: B
B, AppFlow is a fully managed integration service that automatically handles data transfer and transformation, so it's the one that requires the least
operational overhead
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B. AppFlow use case
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
AppFlow = managed service SAAS
upvoted 2 times
6 months, 2 weeks ago
AppFlow = managed service SAAS
upvoted 1 times
Topic 1
Question #42
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several
subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images
from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
A. Launch the NAT gateway in each Availability Zone.
B. Replace the NAT gateway with a NAT instance.
C. Deploy a gateway VPC endpoint for Amazon S3.
D. Provision an EC2 Dedicated Host to run the EC2 instances.
Correct Answer:
C
Highly Voted
5 months, 3 weeks ago
Selected Answer: C
Deploying a gateway VPC endpoint for Amazon S3 is the most cost-effective way for the company to avoid Regional data transfer charges. A
gateway VPC endpoint is a network gateway that allows communication between instances in a VPC and a service, such as Amazon S3, without
requiring an Internet gateway or a NAT device. Data transfer between the VPC and the service through a gateway VPC endpoint is free of charge,
while data transfer between the VPC and the Internet through an Internet gateway or NAT device is subject to data transfer charges. By using a
gateway VPC endpoint, the company can reduce its data transfer costs by eliminating the need to transfer data through the NAT gateway to access
Amazon S3. This option would provide the required connectivity to Amazon S3 and minimize data transfer charges.
upvoted 21 times
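As a concrete sketch of what the comment above describes, the request parameters for creating a gateway VPC endpoint to S3 look roughly like this. The VPC and route table IDs are hypothetical; with boto3 this dict would be passed to `ec2.create_vpc_endpoint`.

```python
def gateway_endpoint_params(vpc_id: str, region: str, route_table_ids: list) -> dict:
    """Request parameters for a gateway VPC endpoint to Amazon S3.
    Once created, the endpoint adds routes for the S3 prefix list to the
    given route tables, so S3 traffic bypasses the NAT gateway entirely."""
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,
    }

# Hypothetical IDs, for illustration only.
params = gateway_endpoint_params("vpc-0abc1234", "us-east-1", ["rtb-0def5678"])
```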
3 weeks, 4 days ago
Very good explanation!
upvoted 2 times
Most Recent
1 week ago
By deploying a gateway VPC endpoint for S3, the company can establish a direct connection between their VPC and S3 without going through the
internet gateway or NAT gateway. This enables traffic between the EC2 instances and S3 to stay within the Amazon network, avoiding Regional data
transfer charges.
A suggests launching the NAT gateway in each AZ. While this can help with availability and redundancy, it does not address the issue of data
transfer charges, as the traffic would still traverse the NAT gateways and incur data transfer fees.
B suggests replacing the NAT gateway with a NAT instance. However, this solution still involves transferring data between the instances and S3
through the NAT instance, which would result in data transfer charges.
D suggests provisioning an EC2 Dedicated Host to run the EC2 instances. While this can provide dedicated hardware for the instances, it does not
directly address the issue of data transfer charges.
upvoted 1 times
3 weeks, 4 days ago
Selected Answer: C
Option C is the answer.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: C
A gateway VPC endpoint is a fully managed service that allows connectivity from a VPC to AWS services such as S3 without the need for a NAT
gateway or a public internet gateway. By deploying a Gateway VPC endpoint for Amazon S3, the company can ensure that all S3 traffic remains
within the VPC and does not cross the regional boundary. This eliminates regional data transfer charges and provides a more cost-effective
solution for the company.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: C
C - gateway VPC endpoint.
upvoted 1 times
6 months ago
'Regional' data transfer isn't clear but I think we have to assume this means the traffic stays in the region.
The two options that seem possible are NAT gateway per AZ vs privatelink gateway endpoints per AZ.
privatelink/endpoints do have costs (url below)
privatelink endpoint / LB costs look lower than NAT gateway costs
privatelink doesn't incur inter-AZ data transfer charges (if in the same region) as NAT gateways do which goes towards the key requirement stated
good writeup here : https://www.vantage.sh/blog/nat-gateway-vpc-endpoint-savings
https://aws.amazon.com/privatelink/pricing/
https://aws.amazon.com/vpc/pricing/
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
upvoted 1 times
Community vote distribution: C (98%)
6 months, 1 week ago
Selected Answer: C
C, privately connects the VPC to AWS services via PrivateLink. Doesn't require a NAT gateway, VPN, or Direct Connect. Data doesn't leave the Amazon
network, so there are no data transfer charges.
A, used to enable instances in private subnets to connect to the internet or AWS services; data transferred is charged.
B, similar to a NAT gateway.
D, not related to data transfer.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
Option C (correct). Deploy a gateway VPC endpoint for Amazon S3.
A VPC endpoint for Amazon S3 allows you to access Amazon S3 resources within your VPC without using the Internet or a NAT gateway. This
means that data transfer between your EC2 instances and S3 will not incur Regional data transfer charges.
Option A (wrong), launching a NAT gateway in each Availability Zone, would not avoid data transfer charges because the NAT gateway would still
be used to access S3.
Option B (wrong), replacing the NAT gateway with a NAT instance, would also not avoid data transfer charges as it would still require using the
Internet or a NAT gateway to access S3.
Option D (wrong), provisioning an EC2 Dedicated Host, would not affect data transfer charges, as it only pertains to the physical host that the EC2
instances are running on and not the data transfer charges for accessing S3.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
VPC endpoint
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 3 weeks ago
The option is C because gateway endpoints provide reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a NAT
device for your VPC. Gateway endpoints do not enable AWS PrivateLink. There is no additional charge for using gateway endpoints.
upvoted 2 times
7 months, 1 week ago
C is correct
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times
7 months, 1 week ago
C is Correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
This link clearly states that "VPC gateway endpoints allow communication to Amazon S3 and Amazon DynamoDB without incurring data transfer
charges within the same Region". On the other hand NAT gateway incurs additional data processing charges. Hence, C is the correct answer.
https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/
upvoted 4 times
7 months, 3 weeks ago
Selected Answer: A
Why not A?
upvoted 1 times
7 months ago
Using the NAT gateway, you will be charged for data transfer out. With a VPC gateway endpoint in place for S3, the service will use an internal route
inside AWS to send data to S3 -> no charge at all.
upvoted 2 times
8 months ago
Selected Answer: C
C is the answer
upvoted 4 times
8 months, 1 week ago
If we deploy a VPC gateway endpoint, then data will be transferred through the AWS network only.
upvoted 2 times
7 months, 3 weeks ago
Though will it not incur a regional data transfer cost? Here the question is to avoid regional data transfer costs.
upvoted 1 times
7 months, 1 week ago
Here it also says "The company is concerned about data transfer charges". They just want to reduce costs hence it is C.
upvoted 2 times
Topic 1
Question #43
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application
has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that
allows for both timely backups to Amazon S3 and minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
A: VPN also goes through the internet and uses the bandwidth
C: daily Snowball transfer is not really a long-term solution when it comes to cost and efficiency
D: S3 limits don't change anything here
So the answer is B
upvoted 23 times
Highly Voted
6 months, 1 week ago
Selected Answer: B
Option B (correct). Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
AWS Direct Connect is a network service that allows you to establish a dedicated network connection from your on-premises data center to AWS.
This connection bypasses the public Internet and can provide more reliable, lower-latency communication between your on-premises application
and Amazon S3. By directing backup traffic through the AWS Direct Connect connection, you can minimize the impact on your internet bandwidth
and ensure timely backups to S3.
upvoted 11 times
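A rough back-of-the-envelope calculation shows why moving backup traffic off the shared internet link matters. The numbers below are hypothetical, not taken from the question:

```python
def transfer_hours(data_gb: float, link_gbps: float, usable_fraction: float = 1.0) -> float:
    """Hours needed to move data_gb over a link of link_gbps, where
    usable_fraction models how much of the link the backup may consume
    without starving internal users."""
    gigabits = data_gb * 8  # 1 GB = 8 Gb
    seconds = gigabits / (link_gbps * usable_fraction)
    return seconds / 3600

# Hypothetical 2 TB nightly backup over a 1 Gbps link:
# sharing with internal users (backup capped at 30% of the link)
# versus a dedicated 1 Gbps Direct Connect link.
shared = transfer_hours(2000, 1.0, usable_fraction=0.3)  # ~14.8 hours
dedicated = transfer_hours(2000, 1.0)                    # ~4.4 hours
```

The dedicated link both finishes the backup faster and leaves the office internet connection untouched, which is the requirement in the question.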
6 months, 1 week ago
Option A (wrong), establishing AWS VPN connections and proxying all traffic through a VPC gateway endpoint, would not necessarily minimize
the impact on internet bandwidth as it would still utilize the public Internet to access S3.
Option C (wrong), using AWS Snowball devices, would not address the issue of internet bandwidth limitations as the data would still need to be
transferred over the Internet to and from the Snowball devices.
Option D (wrong), submitting a support ticket to request the removal of S3 service limits, would not address the issue of internet bandwidth
limitations and would not ensure timely backups to S3.
upvoted 4 times
4 months ago
Option C is wrong, and so is your reason: you do not need the internet to load data onto Snowball devices. If you are using a Snowcone, for example,
you connect it to your on-premises device directly for loading, and AWS will load the data into the cloud. However, it is not effective to do that every
day, hence option B is the better choice.
upvoted 1 times
4 months ago
You're right Option B is the correct answer. I answered Option B as the correct answer above.
upvoted 1 times
Most Recent
1 week ago
Selected Answer: B
AWS Direct Connect provides a dedicated network connection between on-premises and AWS, bypassing the public internet. By establishing this
connection for backup traffic, the company can ensure fast and reliable transfers between their on-premises environment and S3 without impacting
their internet connectivity for internal users. This provides a dedicated, high-speed connection that is well-suited for data transfers and minimizes the
impact of internet bandwidth limitations.
While option A can provide a secure connection, it still utilizes internet bandwidth for data transfer and may not effectively address the issue of limited
bandwidth.
While option C can work for occasional large data transfers, it may not be suitable for frequent backups and can introduce additional operational
overhead.
D, submitting a support ticket to request removal of S3 service limits, does not address the issue of internet bandwidth limitations and is not a
relevant solution for the given requirements.
upvoted 2 times
Community vote distribution: B (98%)
1 week ago
Galleta, I always see your comments! You're great!
upvoted 1 times
3 weeks, 4 days ago
Selected Answer: B
Option B meets these requirements.
upvoted 1 times
1 month, 1 week ago
This question can confuse you, as it mentions the internet while Direct Connect bypasses the internet and uses dedicated network connections. So
don't be fooled - the keyword in the question is "minimal impact on internet connectivity for internal users".
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
AWS Direct Connect is a dedicated network connection that provides a more reliable and consistent network experience compared to internet-
based connections. By establishing a new Direct Connect connection, the company can dedicate a portion of its network bandwidth to transferring
data to Amazon S3, ensuring timely backups while minimizing the impact on internal users.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
Establishing a new AWS Direct Connect connection and directing backup traffic through this new connection would meet these requirements. AWS
Direct Connect is a network service that provides dedicated network connections from on-premises data centers to AWS. It allows the company to
bypass the public Internet and establish a direct connection to AWS, providing a more reliable and lower-latency connection for data transfer. By
directing backup traffic through the Direct Connect connection, the company can reduce the impact on internet connectivity for internal users and
improve the speed of backups to Amazon S3. This solution would provide a long-term solution for timely backups with minimal impact on internet
connectivity.
upvoted 4 times
5 months, 3 weeks ago
Only B and C are viable choices here, and C is more costly than B, so B is the correct answer.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
I thought Direct Connect was, or is, used to connect directly to AWS from on-premises machines, and users are mentioned, which means they might
have users that are not on premises and need connections.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
B, low-latency, dedicated network connections between the on-premises data center and the AWS cloud. Directing backup traffic through Direct
Connect would increase bandwidth and lower latency.
A, doesn't specifically address the needs of the backup traffic.
C, useful for transferring large amounts of data in short periods of time, not for ongoing backups.
D, doesn't directly address the bandwidth constraints.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
7 months, 1 week ago
B is Correct
upvoted 1 times
8 months ago
Selected Answer: B
B is the answer
upvoted 4 times
8 months, 2 weeks ago
AWS Direct Connect and AWS Snowball Edge are primarily classified as "Cloud Dedicated Network Connection" and "Data Transfer" tools
respectively.
Even if we say Snowball takes one-fifth of the cost to transfer 250 TB of data from on-premises to AWS in a week, daily shipments are still not a long-term solution.
upvoted 1 times
8 months, 2 weeks ago
Direct Connect vs Snowball
upvoted 1 times
8 months, 2 weeks ago
B.
The keyword here is "long-term solution".
Direct connect is a dedicated connection between on-prem and AWS, this is the way to ensure stable network connectivity that will not wax and
wane like internet connectivity.
upvoted 3 times
8 months, 2 weeks ago
The answer is B
upvoted 1 times
Topic 1
Question #44
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Enable versioning on the S3 bucket.
B. Enable MFA Delete on the S3 bucket.
C. Create a bucket policy on the S3 bucket.
D. Enable default encryption on the S3 bucket.
E. Create a lifecycle policy for the objects in the S3 bucket.
Correct Answer:
BD
Highly Voted
8 months, 3 weeks ago
Selected Answer: AB
The correct solution is AB, as you can see here:
https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/
It states the following:
To prevent or mitigate future accidental deletions, consider the following features:
Enable versioning to keep historical versions of an object.
Enable Cross-Region Replication of objects.
Enable MFA delete to require multi-factor authentication (MFA) when deleting an object version.
upvoted 39 times
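The two protections from the knowledge-center link above can be sketched as the `VersioningConfiguration` payload that boto3's `s3.put_bucket_versioning` accepts. A minimal illustration only; note that setting MFA Delete additionally requires the request to be made by the root user and signed with an MFA token, which is not shown here.

```python
def versioning_with_mfa_delete() -> dict:
    """VersioningConfiguration enabling both protections discussed above:
    versioning keeps historical object versions, and MFA Delete requires
    multi-factor authentication to permanently delete a version."""
    return {
        "Status": "Enabled",
        "MFADelete": "Enabled",
    }

config = versioning_with_mfa_delete()
```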
Most Recent
1 week ago
Selected Answer: AB
Enabling versioning on S3 ensures multiple versions of an object are stored in the bucket. When an object is updated or deleted, a new version is
created, preserving the previous version.
Enabling MFA Delete adds an additional layer of protection by requiring an MFA device to be present when attempting to delete objects. This helps
prevent accidental or unauthorized deletions by requiring an extra level of authentication.
C. Creating a bucket policy on S3 is more focused on defining access control and permissions for the bucket and its objects, rather than protecting
against accidental deletion.
D. Enabling default encryption on S3 ensures that any new objects uploaded to the bucket are automatically encrypted. While encryption is important
for data security, it does not directly address accidental deletion.
E. Creating a lifecycle policy for objects in S3 allows for automated management of objects based on predefined rules. While this can help with data
retention and storage cost optimization, it does not directly protect against accidental deletion.
upvoted 3 times
3 weeks, 4 days ago
Selected Answer: AB
options A & B meet these requirements, hence A and B are the right answers.
upvoted 1 times
1 month ago
Selected Answer: AB
The correct solution is AB
upvoted 1 times
1 month, 3 weeks ago
The admin out here is trying to get people to fail, lol.
A and B, folks. If somehow this presents as a question needing only one answer, MFA Delete is your go-to.
upvoted 1 times
2 months, 1 week ago
Selected Answer: AB
Had this question on a TD exam... A and B. Period.
upvoted 1 times
Community vote distribution: AB (100%)
2 months, 2 weeks ago
A and B seem like the good ones to me, but couldn't I create a policy to block all deletes and allow Put/Get, etc.?
upvoted 1 times
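To the question in the comment above: yes, a bucket policy can deny deletes outright. A sketch follows, with a hypothetical bucket name; note that an explicit Deny also blocks legitimate cleanup for everyone, which is part of why versioning plus MFA Delete is the usual answer.

```python
import json

def deny_delete_policy(bucket: str) -> dict:
    """Bucket policy that explicitly denies object deletion for every
    principal; an explicit Deny overrides any Allow elsewhere."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "BlockObjectDeletion",
                "Effect": "Deny",
                "Principal": "*",
                "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

# Hypothetical bucket name, for illustration only.
policy = deny_delete_policy("critical-data-bucket")
print(json.dumps(policy, indent=2))
```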
2 months, 3 weeks ago
Selected Answer: AB
A+B will solve the problem
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: AB
Policies and encryption do not affect delete protection
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: AB
A. Enable versioning on the S3 bucket. Versioning allows multiple versions of an object to be stored in the same bucket. When versioning is
enabled, every object uploaded to the bucket is automatically assigned a unique version ID. This provides protection against accidental deletion or
modification of objects.
B. Enable MFA Delete on the S3 bucket. MFA Delete requires the use of a multi-factor authentication (MFA) device to permanently delete an object
or suspend versioning on a bucket. This provides an additional layer of protection against accidental or malicious deletion of objects.
upvoted 1 times
3 months, 4 weeks ago
There is no need to add default S3 encryption; this is already enabled.
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon
S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on
performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS
CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS Command
Line Interface and AWS SDKs
upvoted 1 times
4 months ago
Selected Answer: AB
A & B together solve this problem
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: AB
Enabling versioning on the S3 bucket and enabling MFA Delete on the S3 bucket will help protect the data from accidental deletion.
Versioning allows the company to store multiple versions of an object in the same bucket. When versioning is enabled, S3 automatically archives all
versions of an object (including all writes and deletes) in the bucket. This means that if an object is accidentally deleted, it can be recovered by
restoring an earlier version of the object.
MFA Delete adds an extra layer of protection by requiring users to provide additional authentication (through an MFA device) before they can
permanently delete an object version. This helps prevent accidental or malicious deletion of objects by requiring users to confirm their intent to
delete.
By using both versioning and MFA Delete, the company can protect the data in the S3 bucket from accidental deletion and provide a way to
recover deleted objects if necessary.
upvoted 1 times
6 months ago
As per the white paper, "versioning" is one of the answers.
https://d0.awsstatic.com/whitepapers/protecting-s3-against-object-deletion.pdf
upvoted 1 times
6 months, 1 week ago
Selected Answer: AB
A, versioning is a way to protect buckets from accidental deletions
B, MFA is a way to protect bucket from accidental deletions
upvoted 2 times
6 months, 1 week ago
Selected Answer: AB
***CORRECT***
A. Enable versioning on the S3 bucket.
B. Enable MFA Delete on the S3 bucket.
Enabling versioning on an S3 bucket allows you to store multiple versions of an object in the same bucket. This means that you can recover an
object that was accidentally deleted or overwritten. When versioning is enabled, deleted objects are not permanently deleted, but are instead
marked as deleted and stored as a new version of the object.
Enabling MFA (Multi-Factor Authentication) Delete on an S3 bucket adds an additional layer of security by requiring that you provide a valid MFA
code before permanently deleting an object version. This can help prevent the accidental deletion of objects in the bucket.
upvoted 3 times
6 months, 1 week ago
***WRONG***
Option C, creating a bucket policy, would not directly protect the data from accidental deletion.
Option D, enabling default encryption, would help protect the data from unauthorized access but would not prevent accidental deletion.
Option E, creating a lifecycle policy, can be used to automate the deletion of objects based on specified criteria, but would not prevent
accidental deletion in this case.
upvoted 3 times
6 months, 1 week ago
Selected Answer: AB
A and B
upvoted 1 times
Topic 1
Question #45
A company has a data ingestion workflow that consists of the following:
• An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
• An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the
Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.)
A. Deploy the Lambda function in multiple Availability Zones.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
C. Increase the CPU and memory that are allocated to the Lambda function.
D. Increase provisioned throughput for the Lambda function.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
Correct Answer:
BE
Highly Voted
8 months, 2 weeks ago
Options A, C, and D are out, since Lambda is a fully managed service that provides high availability and scalability on its own.
The answers are B and E.
upvoted 16 times
3 months, 2 weeks ago
There are times you do have to increase lambda memory for improved performance though. But not in this case.
upvoted 3 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: BE
BE so that the lambda function reads the SQS queue and nothing gets lost
upvoted 7 times
Most Recent
1 week ago
Selected Answer: BE
A. Deploying the Lambda function in multiple Availability Zones improves availability and fault tolerance but does not guarantee ingestion of all
data.
C. Increasing CPU and memory allocated to the Lambda function may improve its performance but does not address the issue of connectivity
failures.
D. Increasing provisioned throughput for the Lambda function is not applicable as Lambda functions are automatically scaled by AWS and
provisioned throughput is not configurable.
Therefore, the correct combination of actions to ensure that the Lambda function ingests all data in the future is to create an SQS queue and
subscribe it to the SNS topic (option B) and modify the Lambda function to read from the SQS queue (option E).
upvoted 2 times
3 weeks, 4 days ago
Selected Answer: BE
The combination of actions a solutions architect should take to ensure that the Lambda function ingests all data in the future is to create an
Amazon Simple Queue Service (Amazon SQS) queue and subscribe it to the SNS topic, and to modify the Lambda function to read from the
Amazon SQS queue.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: BE
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic. This will decouple the ingestion workflow and
provide a buffer to temporarily store the data in case of network connectivity issues.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue. This will allow the Lambda function to process
the data from the SQS queue at its own pace, decoupling the data ingestion from the data delivery and providing more flexibility and fault
tolerance.
upvoted 1 times
Community vote distribution: BE (96%), 4% for other options.
4 months, 2 weeks ago
Help
Can an SQS queue have multiple consumers, so that SNS and Lambda can consume at the same time?
upvoted 1 times
5 months ago
How come no one's acknowledged the connection issue? Obviously we know we need SQS as a buffer for messages when the system fails. But
shouldn't we consider provisioned IOPS to handle the connectivity, so maybe it will be less likely to lose connectivity and fail in the first place?
upvoted 2 times
4 months, 3 weeks ago
What does connectivity have to do with Provisioned IOPS which is supposed to enhance I/O rate?
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: BE
To ensure that the Lambda function ingests all data in the future, the solutions architect can create an Amazon Simple Queue Service (Amazon
SQS) queue and subscribe it to the SNS topic. This will allow the data notifications to be queued in the event of a network connectivity issue, rather
than being lost. The solutions architect can then modify the Lambda function to read from the SQS queue, rather than from the SNS topic directly.
This will allow the Lambda function to process any queued data as soon as the network connectivity issue is resolved, without the need for manual
intervention.
By using an SQS queue as a buffer between the SNS topic and the Lambda function, the company can improve the reliability and resilience of the
ingestion workflow. This approach will help ensure that the Lambda function ingests all data in the future, even when there are network
connectivity issues.
upvoted 3 times
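The SNS-to-SQS fan-out described above requires the queue to grant `sqs:SendMessage` to the SNS service. A minimal sketch of that queue access policy follows; the ARNs are hypothetical, and with boto3 the resulting JSON string would be set via `sqs.set_queue_attributes` under the `Policy` attribute.

```python
import json

def sqs_policy_for_sns(queue_arn: str, topic_arn: str) -> str:
    """Queue access policy letting the SNS topic deliver messages into
    the queue; messages then wait durably until Lambda polls them,
    so nothing is lost during connectivity failures."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # Only this specific topic may send, not any SNS topic.
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

# Hypothetical ARNs, for illustration only.
policy = sqs_policy_for_sns(
    "arn:aws:sqs:us-east-1:123456789012:ingest-queue",
    "arn:aws:sns:us-east-1:123456789012:new-data",
)
```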
6 months, 1 week ago
Selected Answer: BE
B and E, allow the data to be queued up in the event of a failure, rather than being lost, then by reading from the queue, the Lambda function will
be able to process the data
A, improves reliability but doesn't ensure all data is ingested
C and D, they improve performance but do not ensure all data is ingested
upvoted 2 times
6 months, 1 week ago
Selected Answer: BE
***CORRECT***
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
An Amazon Simple Queue Service (SQS) queue can be used to decouple the data ingestion workflow and provide a buffer for data deliveries. By
subscribing the SQS queue to the SNS topic, you can ensure that notifications about new data deliveries are sent to the queue even if the Lambda
function is unavailable or experiencing connectivity issues. When the Lambda function is ready to process the data, it can read from the SQS queue
and process the data in the order in which it was received.
upvoted 2 times
6 months, 1 week ago
***WRONG***
Option A, deploying the Lambda function in multiple Availability Zones, would not directly address the issue of connectivity failures.
Option C, increasing the CPU and memory that are allocated to the Lambda function, would not directly address the issue of connectivity
failures. Option D, increasing provisioned throughput for the Lambda function, would not directly address the issue of connectivity failures.
upvoted 2 times
6 months, 1 week ago
Selected Answer: BE
B and E
upvoted 1 times
7 months, 1 week ago
B and E
upvoted 1 times
8 months, 1 week ago
Selected Answer: BE
B and E are the obvious answers here;
SQS ensures that the message does not get lost
upvoted 4 times
8 months, 1 week ago
Selected Answer: AB
Why not AB?
upvoted 1 times
8 months, 1 week ago
Lambda is serverless; it does not need to be multi-AZ.
upvoted 1 times
Topic 1
Question #46
A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The
stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of
the files can exceed 200 GB in size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not
have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?
A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger
an S3 Lifecycle policy to remove the objects that contain PII.
B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use
Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If
objects contain PII, use Amazon Simple Noti cation Service (Amazon SNS) to trigger a noti cation to the administrators to remove the objects
that contain PII.
D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If
objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a noti cation to the administrators and trigger an S3 Lifecycle
policy to remove the meats that contain PII.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
I have a problem with answer B. The question says: "automate remediation". B says that you inform the administrator and he removes the data
manually, that's not automating remediation. Very weird, that would mean that D is correct - but it's so much harder to implement.
upvoted 20 times
2 months, 1 week ago
the problem is... you'd have to write a Lambda function to detect PII? AWS has a product for that, and we know that's Macie
upvoted 2 times
2 months ago
Macie has a file size limit, and the question clearly mentions that 200 GB file sizes are possible. Lambda is the way to go.
upvoted 2 times
2 months ago
"Remediation" does not necessarily mean "deletion". Since the question states "The company wants administrators to be alerted", I believe that in
this case remediation can mean having automation alert the administrator for every hit.
upvoted 3 times
5 months, 3 weeks ago
Pay attention to the entire question: "What should a solutions architect do to meet these requirements with the LEAST development effort?"
That is why Macie is used. The answer is B.
upvoted 4 times
Highly Voted
7 months, 2 weeks ago
Selected Answer: B
Amazon Macie is a data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect your
sensitive data
upvoted 10 times
7 months, 2 weeks ago
Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names,
addresses, and credit card numbers. It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3
upvoted 8 times
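The detect-alert-remediate flow debated in this thread (Macie finding → SNS notification → object removal) could be wired up roughly as a Lambda handler behind an EventBridge rule. This is a minimal sketch, not the exam's prescribed solution: the event shape only approximates a Macie sensitive-data finding, and the clients are injected so the logic can be exercised without AWS.

```python
import json

def handle_macie_finding(event, s3_client, sns_client, topic_arn):
    """Alert administrators over SNS, then delete the offending object.

    The event shape approximates an EventBridge event for a Macie
    sensitive-data finding; field names are illustrative assumptions.
    """
    detail = event["detail"]
    s3_obj = detail["resourcesAffected"]["s3Object"]
    bucket, key = s3_obj["bucketName"], s3_obj["key"]
    # Notify first so an alert goes out even if remediation fails.
    sns_client.publish(
        TopicArn=topic_arn,
        Subject="PII detected in upload",
        Message=json.dumps(
            {"bucket": bucket, "key": key, "findingType": detail["type"]}
        ),
    )
    # Automated remediation: remove the object that contains PII.
    s3_client.delete_object(Bucket=bucket, Key=key)
    return {"bucket": bucket, "key": key}
```

Injecting the clients also makes the notify-then-delete ordering easy to verify with fakes before pointing the handler at real resources.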
Most Recent
1 week ago
Selected Answer: B
Amazon Macie is a managed service that automatically discovers, classifies, and protects sensitive data such as PII in AWS. By enabling Macie on S3,
it can scan the uploaded objects for PII.
Community vote distribution: B (62%), D (38%)
A. Using Amazon Inspector to scan the objects in S3 is not the optimal choice for scanning PII data. Amazon Inspector is designed for host-level
vulnerability assessment rather than content scanning.
C. Implementing custom scanning algorithms in an AWS Lambda function would require significant development effort to handle scanning large
files.
D. Using SES for notification and triggering S3 Lifecycle policy may add unnecessary complexity to the solution.
Therefore, the best option that meets the requirements with the least development effort is to use an S3 as a secure transfer point, utilize Amazon
Macie for PII scanning, and trigger an SNS notification to the administrators (option B).
upvoted 2 times
1 week, 1 day ago
Selected Answer: D
I agree with those saying that Macie does that, but the 200 GB file size and "The company also wants to automate remediation" are only met by
answer D; no other option is correct.
upvoted 1 times
1 week, 2 days ago
Selected Answer: D
Macie sounds good, but the files are too large
upvoted 1 times
4 weeks ago
Selected Answer: B
The company wants administrators to be alerted-> Eliminate A
Requirements with the LEAST development effort-> Eliminate C,D.
upvoted 1 times
1 month, 1 week ago
Lambda has a memory limit of 10GB. None of the options here would work.
upvoted 1 times
1 month, 4 weeks ago
Macie is the key; it uses machine learning to identify PII.
upvoted 1 times
2 months ago
Selected Answer: B
B sounds most logical.
upvoted 1 times
2 months ago
Selected Answer: D
file size can exceed 200 GB which exceeds Macie quotas and it scan the file:
https://docs.aws.amazon.com/macie/latest/user/macie-quotas.html
Size of an individual file to analyze:
Adobe Portable Document Format (.pdf) file: 1,024 MB
Apache Avro object container (.avro) file: 8 GB
Apache Parquet (.parquet) file: 8 GB
Email message (.eml) file: 20 GB
GNU Zip compressed archive (.gz or .gzip) file: 8 GB
Microsoft Excel workbook (.xls or .xlsx) file: 512 MB
Microsoft Word document (.doc or .docx) file: 512 MB
Non-binary text file: 20 GB
TAR archive (.tar) file: 20 GB
ZIP compressed archive (.zip) file: 8 GB
If a file is larger than the applicable quota, Macie doesn't analyze any data in the file.
upvoted 4 times
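The per-file quotas quoted above can be condensed into a quick feasibility check for the 200 GB transcripts. A sketch only: the extension-to-quota map restates the list above in GB, and treating unknown extensions as non-binary text is my own simplifying assumption.

```python
# Per-file Macie analysis quotas from the list above, in GB (1,024 MB = 1 GB).
MACIE_FILE_QUOTAS_GB = {
    ".pdf": 1.0,
    ".avro": 8, ".parquet": 8,
    ".eml": 20,
    ".gz": 8, ".gzip": 8,
    ".xls": 0.5, ".xlsx": 0.5,   # 512 MB
    ".doc": 0.5, ".docx": 0.5,   # 512 MB
    ".txt": 20,                  # non-binary text
    ".tar": 20,
    ".zip": 8,
}

def macie_can_analyze(filename: str, size_gb: float) -> bool:
    """Return True if a file of this type and size fits Macie's per-file quota.

    Unknown extensions fall back to the non-binary-text quota (an assumption).
    """
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    quota = MACIE_FILE_QUOTAS_GB.get(ext, 20)
    return size_gb <= quota
```

A 200 GB transcript fails every quota in the table, which is the core of the pro-D argument in this thread.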
2 months ago
I think B
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
The question requires a solution that meets the requirements with least development effort, not no development effort. Only D meets all the
requirements, with some development effort.
A : does not do any notification
B and C : do not remediate automatically
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: B
B is least development effort
upvoted 1 times
2 months, 4 weeks ago
Answer B is not the best answer because it only triggers a notification to the administrators to manually remove the objects that contain PII. This
requires manual intervention and may result in a delay in removing the PII. Additionally, it does not provide automated remediation, which is one
of the requirements. On the other hand, Answer D implements custom scanning algorithms in an AWS Lambda function that trigger an S3 Lifecycle
policy to automatically remove the objects that contain PII. It also sends a notification to the administrators using Amazon SES. This solution
provides both automated remediation and notification to the administrators, which satisfies the requirements.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: D
Answer D is a better choice than Answer B because it provides the additional capability of automating remediation. In Answer B, administrators are
only notified about the presence of PII in the S3 bucket, but they still have to manually remove the offending objects. In contrast, Answer D uses
AWS Lambda to automatically trigger a notification to administrators via Amazon SES and remove the files with PII through an S3 Lifecycle policy.
This means that the remediation process is automated and requires less manual effort from the administrators. Additionally, using Amazon SES to
send notifications provides greater flexibility in terms of message content and delivery options.
upvoted 2 times
1 month, 3 weeks ago
I agree with answer D because of the sentence "The company also wants to automate remediation."
upvoted 1 times
3 months, 1 week ago
I choose B because AWS Macie can detect PII with Least effort
upvoted 1 times
4 months ago
I think the question is vague. Macie will scan and detect sensitive data types including PII, so it points to B. But the keywords "automate
remediation" tell the architect that he needs to do nothing when the problem is found. Then it points to D, but how would an S3 Lifecycle policy
remove PII data? The question doesn't ask about archiving or storing for a length of time.
I'm confused as to which answer is right. Maybe B, because Macie automates identifying the data.
upvoted 3 times
3 months ago
Agree with you, not clear to me how S3 Lifecycle Management can remove specific files with PII. When you define a S3 lifecycle rule you can set
the scope of the rule with prefix and/or tag and then you can only set "Days after object becomes non-current" as a condition.
upvoted 1 times
Topic 1
Question #47
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will
last 1 week.
What should the company do to guarantee the EC2 capacity?
A. Purchase Reserved Instances that specify the Region needed.
B. Create an On-Demand Capacity Reservation that specifies the Region needed.
C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Reserved Instances are for the long term, so On-Demand will be the right choice. Answer D.
upvoted 16 times
Highly Voted
6 months, 1 week ago
Selected Answer: D
***CORRECT***
Option D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
An On-Demand Capacity Reservation is a type of Amazon EC2 reservation that enables you to create and manage reserved capacity on Amazon
EC2. With an On-Demand Capacity Reservation, you can specify the Region and Availability Zones where you want to reserve capacity, and the
number of EC2 instances you want to reserve. This allows you to guarantee capacity in specific Availability Zones in a specific Region.
***WRONG***
Option A, purchasing Reserved Instances that specify the Region needed, would not guarantee capacity in specific Availability Zones.
Option B, creating an On-Demand Capacity Reservation that specifies the Region needed, would not guarantee capacity in specific Availability
Zones.
Option C, purchasing Reserved Instances that specify the Region and three Availability Zones needed, would not guarantee capacity in specific
Availability Zones as Reserved Instances do not provide capacity reservations.
upvoted 10 times
5 months, 1 week ago
Another reason as to why Reserved Instances aren't the solution here is that you have to commit to either a 1 year or 3 year term, not 1 week.
upvoted 9 times
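Because Capacity Reservations are zonal, the three-AZ requirement described above maps to three separate CreateCapacityReservation calls. A sketch under assumptions: the parameter names mirror the EC2 API, but the instance type, platform, counts, and end date are illustrative placeholders.

```python
def build_capacity_reservations(instance_type, count_per_az, azs, end_date):
    """Build one EC2 CreateCapacityReservation request per Availability Zone.

    Capacity Reservations are zonal, so guaranteeing capacity in three AZs
    for a one-week event means creating three reservations.
    """
    return [
        {
            "InstanceType": instance_type,
            "InstancePlatform": "Linux/UNIX",
            "AvailabilityZone": az,
            "InstanceCount": count_per_az,
            "EndDate": end_date,        # e.g. one week after the event starts
            "EndDateType": "limited",   # auto-release when the event ends
        }
        for az in azs
    ]
```

Setting `EndDateType` to `"limited"` releases the reserved capacity automatically after the event, so nothing keeps accruing On-Demand charges.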
Most Recent
1 week ago
Selected Answer: D
The most appropriate option to guarantee EC2 capacity in three specific Availability Zones in the desired AWS Region for the 1-week event is to
create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones (option D).
A. Purchasing Reserved Instances that specify the Region needed does not guarantee capacity in specific Availability Zones.
B. Creating an On-Demand Capacity Reservation without specifying the Availability Zones would not guarantee capacity in the desired zones.
C. Purchasing Reserved Instances that specify the Region and three Availability Zones is not necessary for a short-term event and involves longer-
term commitments.
upvoted 2 times
1 month, 1 week ago
Reserved Instances are for the long term.
An On-Demand Capacity Reservation enables you to choose specific AZs for any duration.
upvoted 1 times
3 months, 2 weeks ago
It's just for 1 week, so D: On-Demand.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
I agree that the answer is D because it's only needed for a 1-week event. C would be right if it were a recurring event for 1 or more years, as
Reserved Instances have to be purchased on long-term commitments but would satisfy the capacity requirements.
https://aws.amazon.com/ec2/pricing/reserved-instances/
Community vote distribution: D (100%)
upvoted 1 times
4 months, 3 weeks ago
D. Reservations are used for the long term, a minimum of 1-3 years, making them cheaper. An On-Demand Capacity Reservation is where you
always get access to capacity, whether reserved 1 week or 1 month in advance in an AZ, but you pay the On-Demand price, meaning there is no
discount.
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
Correct answer is On-Demand Capacity Reservation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
To guarantee EC2 capacity in specific Availability Zones, the company should create an On-Demand Capacity Reservation. On-Demand Capacity
Reservations are a type of EC2 resource that allows the company to reserve capacity for On-Demand instances in a specific Availability Zone or set
of Availability Zones. By creating an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed, the company
can guarantee that it will have the EC2 capacity it needs for the upcoming event. The reservation will last for the duration of the event (1 week) and
will ensure that the company has the capacity it needs to run its workloads.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
D, specify the number of instances and AZs for a period of 1 week and then use them whenever needed.
A and C, aren't designed to provide guaranteed capacity
B, doesn't guarantee that EC2 capacity will be available in the three specific AZs
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
Answer D is correct.
upvoted 1 times
6 months, 4 weeks ago
Selected Answer: D
Yes answer is D
upvoted 1 times
7 months ago
Selected Answer: D
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html#capacity-reservations-differences
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: D
Absolutely D
upvoted 1 times
7 months, 2 weeks ago
D is the correct answer
upvoted 1 times
Topic 1
Question #48
A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly
available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
A. Move the catalog to Amazon ElastiCache for Redis.
B. Deploy a larger EC2 instance with a larger instance store.
C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: D
The keyword is "durable" location.
A and B are ephemeral storage.
C takes forever, so it is not HA.
That leaves D.
upvoted 24 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
ElastiCache is in-memory; EFS is for durability.
upvoted 14 times
Most Recent
3 days, 20 hours ago
Amazon Elastic File System (Amazon EFS) provides a scalable and durable file storage service that can be mounted on multiple EC2 instances
simultaneously. By moving the catalog to an EFS file system, the data will be stored in a durable location with built-in redundancy. It will also be
accessible from multiple EC2 instances, ensuring high availability.
upvoted 1 times
1 week ago
Selected Answer: D
Option A is not suitable for storing the catalog as ElastiCache is an in-memory data store primarily used for caching and cannot provide durable
storage for the catalog.
Option B would not address the requirement for high availability or durability. Instance stores are ephemeral storage attached to EC2 instances and
are not durable or replicated.
Option C would provide durability but not high availability. S3 Glacier Deep Archive is designed for long-term archival storage, and accessing the
data from Glacier can have significant retrieval times and costs.
Therefore, option D is the most suitable choice to ensure high availability and durability for the company's catalog.
upvoted 2 times
3 weeks, 4 days ago
Selected Answer: A
Option A meets the requirements.
upvoted 1 times
1 month, 4 weeks ago
Elasticache is using cache functionality. EFS is for durability.
upvoted 1 times
2 months, 1 week ago
Selected Answer: D
You can technically store data with A, since it's an in-memory option, but it's nowhere near as durable as EFS, which is D.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
weird question with D the least incorrect option
Community vote distribution: D (93%), 7%
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Key word: durable
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
Amazon EFS is a fully managed, scalable, and highly available file storage service that provides durable and scalable storage for shared access to
files. It is designed to provide high availability and durability, with data stored across multiple availability zones (AZs) within a region.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
To make the catalog highly available and store it in a durable location, a solutions architect should move the catalog from the instance store to an
Amazon EBS volume or an Amazon EFS file system. Option D is correct.
Option A, moving the catalog to Amazon ElastiCache for Redis, would improve performance by caching frequently accessed data, but it does not
provide durability or high availability for the catalog data.
Option B, deploying a larger EC2 instance with a larger instance store, would not provide durability because data on an instance store is lost when
the instance is stopped, terminated, or fails.
Option C, moving the catalog to Amazon S3 Glacier Deep Archive, would provide durability but not high availability, as it is designed for infrequent
access and retrieval times of several hours.
Therefore, option D is the best solution to meet the company's requirements. Moving the catalog to an Amazon EBS volume or an Amazon EFS file
system would provide durable storage and support high availability configurations.
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: D
Amazon EFS is designed to be highly durable and highly available. https://aws.amazon.com/efs/faq/
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: D
D. Elastic cache is temporary, whereas EFS is regional so HA and durable.
upvoted 1 times
5 months ago
Selected Answer: D
What's durable and HA here?
It must be EFS, as ElastiCache is ephemeral storage only.
upvoted 1 times
5 months, 1 week ago
Must be A. Not D, since EFS is used for a very different purpose: concurrently accessing data between a large number of Linux instances. For a
simple catalog, EFS would be a great waste.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
To make sure that the catalog is highly available and stored in a durable location, the solutions architect should move the catalog from the EC2
instance store to an Amazon Elastic File System (Amazon EFS) file system. Amazon EFS is a fully managed, elastic file storage service that is
designed to scale up and down as needed, providing a durable and highly available storage solution for data that needs to be accessed
concurrently from multiple Amazon EC2 instances. By moving the catalog to Amazon EFS, the company can ensure that the catalog is stored in a
durable location and is highly available for access by the website.
upvoted 1 times
5 months, 2 weeks ago
EFS is Linux only. How can we be sure as it is not mentioned if it is Linux based?
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: A
Elastic Cache is not durable by default
upvoted 1 times
5 months, 1 week ago
Why did you vote for ElastiCache then?
upvoted 2 times
Topic 1
Question #49
A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files
infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-
old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?
A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year.
Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3
Glacier Select.
C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage.
Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata
from Amazon S3.
D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year.
Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
I think the answer is B:
Users access the files randomly
S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or
retention period. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new
applications, and user-generated content.
https://aws.amazon.com/fr/s3/storage-classes/intelligent-tiering/
upvoted 30 times
1 month, 1 week ago
Answer is C, why not intelligent Tiering
If the Intelligent-Tiering data transitions to Glacier after 180 days instead of 1 year, it would still be a cost-effective solution that meets the
requirements.
With files stored in Amazon S3 Intelligent-Tiering, the data is automatically moved to the appropriate storage class based on its access patterns.
In this case, if the data transitions to Glacier after 180 days, it means that files that are infrequently accessed beyond the initial 180 days will be
stored in Glacier, which is a lower-cost storage option compared to S3 Standard.
upvoted 4 times
3 months, 4 weeks ago
What if you have not accessed the file for 360 days, the intelligent tier has moved the file to Glacier, and on day 364 you want to access the file
instantly?
I think C is the right choice.
upvoted 3 times
4 months ago
It says "S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns".
However, the question says the access pattern is predictable: there is frequent access for about 1 year.
upvoted 1 times
3 months, 2 weeks ago
It doesn't say predictable; it says files are accessed randomly. Random = unpredictable. The answer is B.
upvoted 4 times
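Option B's move to Glacier Flexible Retrieval after one year is just an S3 Lifecycle rule. A sketch: the dictionary shape follows what S3's PutBucketLifecycleConfiguration accepts (Flexible Retrieval's storage-class name is `GLACIER`), while the rule ID and prefix are hypothetical.

```python
def transcript_lifecycle_rule(days: int = 365) -> dict:
    """S3 Lifecycle rule moving transcripts to Glacier Flexible Retrieval
    (storage class name 'GLACier'.upper() == 'GLACIER') after one year,
    as in option B. The rule ID and prefix are illustrative.
    """
    return {
        "Rules": [
            {
                "ID": "transcripts-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "transcripts/"},
                "Transitions": [
                    # Objects older than `days` transition out of the
                    # fast-access class; retrieval is then slower but cheap.
                    {"Days": days, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }
```

Files younger than the threshold stay in their original class with millisecond access, which is exactly the "fast for under a year, delay acceptable after" split the question asks for.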
Highly Voted
8 months, 2 weeks ago
The answer is B
upvoted 10 times
Most Recent
2 days, 14 hours ago
Community vote distribution: B (71%), C (20%), 6%
Selected Answer: B
I asked ChatGPT. In conclusion, C is possible, but C is not cost-effective.
Both options B and C can meet the requirements, but option B with S3 Intelligent-Tiering may provide more cost savings, as it optimizes storage
costs based on access patterns, automatically moving files to the most appropriate tier. However, if the priority is primarily fast retrieval of
files less than 1 year old and the cost difference is not a significant concern, option C with Amazon S3 Standard storage and S3 Glacier Instant
Retrieval can be a valid and cost-effective choice as well.
upvoted 1 times
1 week ago
Selected Answer: B
Option A would not optimize the retrieval of files less than 1-year-old, as the files would be stored in S3 Glacier, which has longer retrieval times
compared to S3 Intelligent-Tiering.
Option C adds complexity by involving two storage classes and may not provide the most cost-effective solution.
Option D would require additional infrastructure with RDS for storing metadata and retrieval from S3 Glacier Deep Archive, which may not be
necessary and could incur higher costs.
Option B is the most suitable and cost-effective solution for optimizing file retrieval based on the access patterns described. Amazon S3 Intelligent-
Tiering is a storage class that automatically moves objects between two access tiers: frequent access and infrequent access, based on their access
patterns. By storing the files in S3 Intelligent-Tiering, the files less than 1-year-old will be kept in the frequent access tier, allowing for quick
retrieval.
upvoted 2 times
1 week, 3 days ago
Selected Answer: B
The answer is B because of the random access to files, and because the query service needed is Athena.
upvoted 1 times
1 month, 1 week ago
For all of you who (incorrectly, in my opinion) select answer B: you are forgetting that an object is moved to Deep Archive access after 180 days of
inactivity (here's the link with the details: https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-overview.html).
Considering the above, it could happen that an object is required after day 180 of the first year; in that case the object is not immediately
reachable, so one of the requirements is not met.
The correct answer should be C.
upvoted 3 times
2 weeks, 3 days ago
It says: "before your specified number of days of no access (for example, 180 days)", so this number of days is just an example. Also, the files
are to be deleted one year after being made, and this would keep the files for a specified time after the last use, which means that if they are
used within that year they would be saved for more than the intended year. Therefore, B is correct.
upvoted 2 times
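The archive tiers argued about in this subthread are opt-in and configured per bucket. A hedged sketch: the shape mirrors S3's PutBucketIntelligentTieringConfiguration request, the day thresholds are illustrative (the service enforces minimums of roughly 90 days for Archive Access and 180 for Deep Archive Access), and the configuration ID is made up.

```python
def intelligent_tiering_config(archive_days: int = 180,
                               deep_archive_days: int = 365) -> dict:
    """Opt-in Archive Access tiers for S3 Intelligent-Tiering.

    Without a configuration like this, Intelligent-Tiering only moves
    objects between its millisecond-latency tiers, which is the crux of
    the B-vs-C debate above. Day values are illustrative.
    """
    return {
        "Id": "archive-tiers",
        "Status": "Enabled",
        "Tierings": [
            {"Days": archive_days, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": deep_archive_days, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    }
```

Whether option B keeps sub-year files instantly retrievable thus depends on whether a configuration like this has been applied at all.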
1 month, 1 week ago
Random → unpredictable → Intelligent-Tiering
upvoted 1 times
1 week, 1 day ago
But it says that for the first year access must be instant, so it can't be Intelligent-Tiering, because if it moves the file to deep archive before 1 year,
you can't access the resource instantly.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
Key notes here:
1. "...randomly within 1 year of the call,.." Randomly = unpredictable -> Intelligent Tiering
2. "but users access the files infrequently after 1 year" coupled with "retrieve files that are less than 1-year-old as quickly as possible. A delay in
retrieving older files is acceptable" -> Glacier Flexible Retrieval (has options for expedited = 1-5 minutes, standard = 3-5 hours, and bulk = 5-12
hours).
Last but not least is "giving users the ability to QUERY". Query = Athena. It's literally a serverless query service to analyze data stored explicitly in
S3.
upvoted 4 times
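The "query = Athena" point above boils down to a StartQueryExecution request against the transcript data in S3. A sketch only: the parameter names follow the Athena API, but the database, table, column, and results-bucket names are hypothetical.

```python
def athena_transcript_query(month: str) -> dict:
    """Build an Athena StartQueryExecution request for recent transcripts,
    as option B describes. All identifiers below are illustrative.
    """
    return {
        "QueryString": (
            "SELECT call_id, store_id, transcript_uri "
            "FROM call_transcripts "
            f"WHERE call_month = '{month}'"
        ),
        # The Glue/Athena database holding the transcript table (assumed).
        "QueryExecutionContext": {"Database": "marketing"},
        # Athena requires an S3 location for query results.
        "ResultConfiguration": {
            "OutputLocation": "s3://example-athena-results/transcripts/"
        },
    }
```

Athena queries the objects where they sit in S3, so the sub-year files stay instantly queryable without any restore step.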
1 month, 4 weeks ago
Accessing files randomly triggers the option of Intelligent-Tiering. Glacier is for archival and Athena for quick queries. The answer is B.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
S3 Intelligent-Tiering is ideal for data with an irregular access pattern. Since the requirement here states that files older than 1 year are retrieved
infrequently (the pattern is fixed), there is no need to use S3 Intelligent-Tiering; S3 Standard is more suitable. And data more than 1 year old can be
moved to S3 Glacier. In addition, answer C uses search metadata when storing the data, which allows the files to be retrieved as quickly as possible
when required. Thus the answer should be C.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Option B appears to be the most cost-effective solution to meet the company's requirements. By storing the files in Amazon S3 Intelligent-
Tiering, Lifecycle policies can be used to move the files to S3 Glacier Flexible Retrieval after 1 year, saving long-term storage costs. To access the
files in Amazon S3, Amazon Athena can be used, which allows fast and efficient retrieval of the files that are less than 1 year old. To access the
files in S3 Glacier, S3 Glacier Select can be used, which allows selective data retrieval and reduces retrieval costs. This solution is also scalable,
meaning it can handle large volumes of data and a high number of users.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Keyword "durable" for Intelligent-Tiering.
Athena for S3 queries.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: B
Option B, Store individual files in Amazon S3 Intelligent-Tiering, is a cost-effective solution as it automatically moves objects between four access
tiers (frequent, infrequent, archive, and deep archive) based on changing access patterns and automatically optimizes costs for the company. The
S3 Lifecycle policies can be used to move files to S3 Glacier Flexible Retrieval after 1 year, which has a retrieval time of minutes to hours. Amazon
Athena can be used to query and retrieve files that are still in S3 Intelligent-Tiering, and S3 Glacier Select can be used to query and retrieve files
that have been moved to S3 Glacier Flexible Retrieval.
upvoted 1 times
3 months ago
"As quickly as possible" is the key point for retrieval of files which are less than 1 year old. So, Option C is the answer.
upvoted 1 times
2 months, 1 week ago
"S3 Glacier Instant Retrieval after 1 year."
The data after 1 year does not require quick access, so it is more expensive and does not fit the requirement.
upvoted 1 times
3 months, 1 week ago
Selected Answer: B
"Access the files randomly"
upvoted 1 times
4 months ago
Selected Answer: B
I originally thought C but changed my mind to B.
Intelligent-Tiering will only ever move objects to storage classes with millisecond latency.
https://aws.amazon.com/s3/storage-classes/
I was originally concerned a file would go, after several months but before a year, to a storage class with higher latency, but that is not the case.
upvoted 1 times
4 months ago
I disagree with B. It clearly says that files less than 1 year old must be accessed as quickly as possible. With Intelligent-Tiering, if a file is not
accessed after 3 months it will be moved to archive, and then you lose this requirement.
upvoted 7 times
Topic 1
Question #50
A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The
company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?
A. Create an AWS Lambda function to apply the patch to all EC2 instances.
B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
Correct Answer:
D
Highly Voted
7 months, 2 weeks ago
The primary focus of Patch Manager, a capability of AWS Systems Manager, is on installing operating systems security-related updates on managed
nodes. By default, Patch Manager doesn't install all available patches, but rather a smaller set of patches focused on security. (Ref
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-how-it-works-selection.html)
Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. (Ref
https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html)
Seems like Patch Manager is meant for OS-level patches and not 3rd-party applications, and this falls under Run Command's wheelhouse: carrying
out one-time configuration changes (updating a 3rd-party application) at scale.
upvoted 25 times
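The one-time, fleet-wide change described above maps to a single SSM SendCommand call targeting instances by tag. A sketch under assumptions: the parameter names follow the SSM SendCommand API and `AWS-RunShellScript` is a real AWS-managed document, but the tag, the patch script path, and the concurrency settings are illustrative.

```python
def patch_run_command(tag_key: str = "Workload",
                      tag_value: str = "third-party-app") -> dict:
    """Build an SSM SendCommand request that runs a vendor patch script on
    every tagged instance at once, as option D describes. The tag, script
    path, and concurrency values are illustrative assumptions.
    """
    return {
        "DocumentName": "AWS-RunShellScript",
        # Target all 1,000 instances by tag rather than listing instance IDs.
        "Targets": [{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        "Parameters": {"commands": ["sudo /opt/vendor/apply-critical-patch.sh"]},
        "MaxConcurrency": "100%",  # patch everything in parallel, fast
        "MaxErrors": "5%",         # abort if too many instances fail
    }
```

`MaxConcurrency`/`MaxErrors` are what let Run Command push a critical patch quickly while still bailing out if the script itself is broken.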
Highly Voted
6 months, 2 weeks ago
D
AWS Systems Manager Run Command allows the company to run commands or scripts on multiple EC2 instances. By using Run Command, the
company can quickly and easily apply the patch to all 1,000 EC2 instances to remediate the security vulnerability.
Creating an AWS Lambda function to apply the patch to all EC2 instances would not be a suitable solution, as Lambda functions are not designed
to run on EC2 instances. Configuring AWS Systems Manager Patch Manager to apply the patch to all EC2 instances would not be a suitable
solution, as Patch Manager is not designed to apply third-party software patches. Scheduling an AWS Systems Manager maintenance window to
apply the patch to all EC2 instances would not be a suitable solution, as maintenance windows are not designed to apply patches to third-party
software
upvoted 14 times
Most Recent
1 week ago
Selected Answer: B
SSM Patch Manager offers a centralized and automated approach to patch management, allowing administrators to efficiently manage patching
operations across a large number of instances. It provides features such as patch compliance reporting and the ability to specify maintenance
windows to control the timing of patch installations.
A suggests using a Lambda function to apply the patch. It requires additional development effort to create and manage the function, handle errors
and retries, and scale the solution appropriately to a large number of instances.
C suggests scheduling an SSM maintenance window. While maintenance windows can be used to orchestrate patching activities, they may not
provide the fastest patching time for all instances, as execution is deferred to the defined maintenance window timeframe.
D suggests using Run Command to run a custom command for patching. While it can be used to execute commands on multiple instances, it
requires manual execution and may not provide the scalability and automation capabilities that Patch Manager offers.
upvoted 2 times
2 weeks, 2 days ago
Selected Answer: D
answer D
keyword - The workload is powered by third-party software
patch manager patches AWS managed nodes OSs
we don't know what is running on the ec2 and what kind of vulnerability is that
upvoted 1 times
1 month, 1 week ago
Since it's a third-party application, use a custom command to apply the patch manually on all EC2 instances.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
Answer of ChatGPT: "To remediate the critical security vulnerability in the third-party software running on 1,000 Amazon EC2 instances, the most
appropriate solution is to use AWS Systems Manager Patch Manager to apply the patch to all instances. AWS Systems Manager Patch Manager
automates the process of patching instances across hybrid environments and reduces the time and effort required to patch instances. Patch
Manager enables administrators to select and approve patches for automatic deployment to instances in a controlled and secure manner. The
patching process can be scheduled, tracked, and automated using Patch Manager, which also provides compliance reporting and dashboards. By
using Patch Manager, the solutions architect can quickly and efficiently patch all EC2 instances and ensure that the workload remains secure."
upvoted 1 times
1 month, 1 week ago
Okay, here ChatGPT is insanely inaccurate. When I ask ChatGPT a question on this,
I first copy-paste the question, then I write "the correct answer is [whatever the correct answer determined by the discussion]".
Then I get correct information on why the other answer choices are wrong and why the correct answer choice is correct.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: D
My answer is D for 3rd party patching.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Why not D: AWS Systems Manager Run Command could also be used to run a custom command to apply the patch to all EC2 instances, but
it requires creating and testing the command manually, which could be time-consuming. Additionally, option B, Patch Manager, has more features
and capabilities that can help in managing patches, including scheduling patch deployments and reporting on patch compliance.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Systems Manager Patch Manager can patch Linux boxes, and ALL the instances are Linux. See:
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-windows-and-linux-differences.html
So, using Patch Manager you can manage the deployment (with policies, creating groups, etc.), so it's the best and most secure way to do it.
upvoted 3 times
2 months, 4 weeks ago
Selected Answer: D
AWS Systems Manager Run Command provides a simple way of automating common administrative tasks across groups of instances. It allows
users to execute scripts or commands across multiple instances simultaneously, without requiring SSH or RDP access to each instance. With AWS
Systems Manager Run Command, users can easily manage Amazon EC2 instances and instances running on-premises or in other cloud
environments.
upvoted 3 times
3 months ago
B.- Systems Manager – Patch Manager for OS updates, applications updates, security
updates. Supports Linux, macOS, and Windows
upvoted 1 times
3 months ago
Selected Answer: D
D is good for me.
upvoted 1 times
4 months, 1 week ago
D
Systems Manager Run Command lets you run a custom command and pull the patches down to the EC2 instances. This is a reasonable option, well suited to this use
case.
upvoted 1 times
4 months, 1 week ago
I read the dump over at mark4sure and their answer is B. Nearly blew it :')
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
D = Third Party Workload. Use Run Command.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: D
To quickly apply a patch to the third-party software on all EC2 instances, the solutions architect can use AWS Systems Manager Run Command. Run
Command is a feature of AWS Systems Manager that allows you to remotely and securely run shell scripts or Windows PowerShell commands on
EC2 instances. By using Run Command, the solutions architect can quickly and easily apply the patch to all EC2 instances by running a custom
command. This will allow the company to quickly and efficiently remediate the critical security vulnerability without the need to manually patch
each instance or create a custom solution such as an AWS Lambda function or maintenance window.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
D is the quickest solution.
upvoted 1 times
5 months, 3 weeks ago
New answer is B : You can use Patch Manager to apply patches for both operating systems and applications
upvoted 3 times
Topic 1
Question #51
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the
shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every
morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to send the data to Amazon Kinesis Data Firehose.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API
for the data.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the
application's API for the data.
E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to
send the report by email.
Correct Answer:
DE
Highly Voted
8 months, 2 weeks ago
Selected Answer: BD
You can use SES to format the report in HTML.
https://docs.aws.amazon.com/ses/latest/dg/send-email-formatted.html
upvoted 23 times
2 months, 4 weeks ago
This document is talking about the SES API, not SES itself. SES does not format data; it just sends emails.
https://aws.amazon.com/ses/
upvoted 3 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: BD
B & D are the only 2 correct options. If you are choosing option E then you missed the daily morning schedule requirement mentioned in the
question, which can't be achieved with S3 events for SNS. EventBridge can be used to configure scheduled events (every morning in this case). Option
B fulfills the email-in-HTML-format requirement (by SES) and D fulfills the every-morning schedule requirement (by EventBridge).
upvoted 15 times
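The B+D combination described above can be sketched in boto3: EventBridge triggers a Lambda every morning, which formats the stats as HTML and sends them through SES. This is a rough sketch; the sender address, recipients, and cron expression are hypothetical, and the AWS calls themselves are commented out since they require credentials and verified SES identities.

```python
def build_ses_email(stats, sender="reports@example.com",
                    recipients=("team@example.com",)):
    """Build ses.send_email parameters with an HTML body (something SNS can't do)."""
    rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in stats.items())
    html = f"<html><body><table>{rows}</table></body></html>"
    return {
        "Source": sender,
        "Destination": {"ToAddresses": list(recipients)},
        "Message": {
            "Subject": {"Data": "Daily shipping report"},
            "Body": {"Html": {"Data": html}},  # HTML-formatted body
        },
    }

params = build_ses_email({"orders_shipped": 120, "orders_delayed": 3})
# Inside the Lambda handler:
#   boto3.client("ses").send_email(**params)
# The morning schedule itself is an EventBridge rule, e.g.:
#   events.put_rule(Name="daily-report", ScheduleExpression="cron(0 7 * * ? *)")
```

The key point the discussion makes is in the `Body.Html` field: SES accepts an HTML body directly, while SNS email subscriptions only deliver plain text.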
Most Recent
2 days, 17 hours ago
Selected Answer: DE
D: To schedule the event every morning and format the HTML
E: To store the HTML in S3 and send the email using SNS
upvoted 1 times
2 days, 17 hours ago
Selected Answer: DE
SES cannot format the data.
upvoted 1 times
1 week ago
Selected Answer: BD
D: Create an EventBridge (CloudWatch Events) scheduled event that invokes Lambda to query API for data. This scheduled event can be set to
trigger at desired time every morning to fetch shipping statistics from API.
B: Use SES to format data and send report by email. In Lambda, after retrieving shipping statistics, you can format data into an easy-to-read HTML
format using any HTML templating framework.
Options A, C, and E are not necessary for achieving the desired outcome. Option A is typically used for real-time streaming data ingestion and
delivery to data lakes or analytics services. Glue (C) is a fully managed extract, transform, and load (ETL) service, which may be an overcomplication
for this scenario. Storing the application data in S3 and using SNS (E) can be an alternative approach, but it adds unnecessary complexity.
upvoted 2 times
1 week, 6 days ago
Selected Answer: CE
Explanation:
D. By creating an Amazon EventBridge scheduled event that triggers an AWS Lambda function, you can automate the process of querying the
application's API for shipping statistics. The Lambda function can retrieve the data and perform any necessary formatting or transformation before
proceeding to the next step.
E. Storing the application data in Amazon S3 allows for easy accessibility and further processing. You can configure an S3 event notification to
trigger an Amazon Simple Notification Service (SNS) topic whenever new data is uploaded to the S3 bucket. The SNS topic can be configured to
send the report by email to the desired email addresses.
upvoted 1 times
3 weeks, 2 days ago
Selected Answer: BD
I go for BD options.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BC
extract and transform the data
AWS Glue is not always used for ETL processes that deal with unstructured data. Glue can also be used for ETL processes that deal with structured
data. Glue provides a fully managed ETL service that makes it easy to move data between data stores. It can be used to transform and clean data in
a scalable and cost-effective manner, and it supports a wide range of data formats, including both structured and unstructured data.
upvoted 3 times
2 months, 1 week ago
Selected Answer: DE
In summary, option E is chosen because it provides a way to extract and organize the data into an easy-to-read HTML format and send it via email
using Amazon S3 and Amazon SNS, respectively. While option B can be used to send the email, it does not provide a way to extract, organize, or
store the data, which is a requirement in this case.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: DE
SES is not used for formatting data, whereas an email endpoint can subscribe to an SNS topic.
Answer is DE
upvoted 2 times
5 months, 1 week ago
Selected Answer: BD
You can't use SNS for HTML e-mails
upvoted 4 times
5 months, 1 week ago
Selected Answer: BD
https://kennbrodhagen.net/2016/01/31/how-to-return-html-from-aws-api-gateway-lambda/
upvoted 1 times
5 months, 1 week ago
Selected Answer: BD
For anyone confused with Option E, I don't think the issue comes from the first part, i.e. using S3 notification every time in the morning. It may not
be 100% right as the lambda function needs the help of EventBridge Rule to run on a schedule. But in general, the S3 notification can be triggered
as the new object is uploaded by the lambda function.
The REAL problem comes from the second part of the statement, i.e. using SNS to send email. It is true that SNS can send emails, BUT it cannot
send HTML-formatted emails.
https://stackoverflow.com/questions/32241928/sending-html-content-in-aws-snssimple-notification-service-emails-notification
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: BD
To meet the requirements, the solutions architect can create an Amazon EventBridge (formerly known as Amazon CloudWatch Events) scheduled
event that invokes an AWS Lambda function to query the application's API for the data. The scheduled event can be configured to run at the
desired time every morning. The Lambda function can be responsible for querying the API, formatting the data into an HTML format, and sending
the report by email using Amazon Simple Email Service (Amazon SES).
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: BC
Why is no one noticing the 'extract' key word? That's key for using Glue. Eventbridge can trigger Glue which extracts from the API and transforms
the data to send it to SES.
upvoted 4 times
5 months, 3 weeks ago
AWS Glue is usually used for ETL processes that deal with unstructured data. When using Glue, the data will usually be sent to big data storage
like Redshift. It is seldom used just for sending email.
Lambda can easily get API data and do any filtering, let say some python code to extract JSON from API.
upvoted 6 times
6 months ago
Selected Answer: BD
With SNS you can't customize the body of the email message. The email delivery feature is intended to provide internal system alerts
upvoted 2 times
Topic 1
Question #52
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to
hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales
automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?
A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store
(Amazon EBS) for storage.
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for
storage.
D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for
storage.
Correct Answer:
C
Highly Voted
8 months, 1 week ago
Selected Answer: C
EFS is a standard file system, it scales automatically and is highly available.
upvoted 17 times
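As a sketch, the EFS side of option C is a single API call plus a mount on each instance. The parameter shape below matches `efs.create_file_system`; the creation token is hypothetical, and the call and mount command are shown as comments because they need real AWS resources.

```python
efs_params = {
    "CreationToken": "app-output-fs",   # idempotency token (hypothetical name)
    "PerformanceMode": "generalPurpose",
    "ThroughputMode": "elastic",        # throughput scales automatically with the workload
    "Encrypted": True,
}
# fs = boto3.client("efs").create_file_system(**efs_params)
# On every EC2 instance in the Multi-AZ Auto Scaling group
# (all AZs mount the same shared file system):
#   sudo mount -t efs fs-12345678:/ /mnt/output
```

Because every instance in every AZ mounts the same file system, scaling the Auto Scaling group in or out needs no storage reconfiguration.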
Highly Voted
8 months, 2 weeks ago
I have absolutely no idea...
Output files that vary in size from tens of gigabytes to hundreds of terabytes
Limit size for a single object:
S3: 5 TiB
https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/
EBS: 64 TiB
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html
EFS: 47.9 TiB
https://docs.aws.amazon.com/efs/latest/ug/limits.html
upvoted 8 times
4 months, 1 week ago
The answer to that is:
Limit size for a single object:
S3: 5 TiB is per object, but you can have more than one object in a bucket, so effectively unlimited.
https://aws.amazon.com/fr/blogs/aws/amazon-s3-object-size-limit/
EBS: 64 TiB is per block volume.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_constraints.html
EFS: 47.9 TiB is per file, and the question says "files", plural.
https://docs.aws.amazon.com/efs/latest/ug/limits.html
upvoted 1 times
6 months, 2 weeks ago
None meets hundreds of TB per file. A bit confusing/misleading.
upvoted 2 times
6 months, 3 weeks ago
S3 and EBS are block storage but you are looking to store files, so EFS is the correct option.
upvoted 1 times
5 months, 2 weeks ago
S3 is object storage.
upvoted 8 times
Most Recent
1 week ago
Selected Answer: C
EFS provides a scalable and fully managed file system that can be easily mounted to multiple EC2. It allows you to store and access files using the
standard file system structure, which aligns with the company's requirement for a standard file system. EFS automatically scales with the size of
your data.
A suggests using ECS for container orchestration and S3 for storage. ECS doesn't offer a native file system storage solution. S3 is an object storage
service and may not be the most suitable option for a standard file system structure.
B suggests using EKS for container orchestration and EBS for storage. Similar to A, EBS is block storage and not optimized for file system access.
While EKS can manage containers, it doesn't specifically address the file storage requirements.
D suggests using EC2 with EBS for storage. While EBS can provide block storage for EC2, it doesn't inherently offer a scalable file system solution
like EFS. You would need to manage and provision EBS volumes manually, which may introduce operational overhead.
upvoted 2 times
3 weeks, 2 days ago
Selected Answer: C
Option C meets the requirements.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: C
Keywords: file system structure, scales automatically, highly available, and minimal operational overhead
upvoted 1 times
4 months, 2 weeks ago
"Standard file system structure" is the KEYWORD here; S3 and EBS are not file-based storage, EFS is. So the automatic answer is C.
upvoted 1 times
5 months ago
Selected Answer: C
I will go with C. If the app is deployed Multi-AZ, the compute instances are separate but the storage needs to be common.
EFS is the easiest way to configure shared storage, compared to shared EBS.
Hence C suits best.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: C
Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
C = File storage system, Multi AZ ASG lets you maintain high availability
Not A, B or D because they don't meet the requirement of file system storage
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
To meet the requirements, a solution that would allow the company to migrate its on-premises application to AWS and scale automatically, be
highly available, and require minimum operational overhead would be to migrate the application to Amazon Elastic Compute Cloud (Amazon EC2)
instances in a Multi-AZ (Availability Zone) Auto Scaling group.
upvoted 1 times
6 months, 1 week ago
The Auto Scaling group would allow the application to automatically scale up or down based on demand, ensuring that the application has the
required capacity to handle incoming requests. To store the data produced by the application, the company could use Amazon Elastic File
System (Amazon EFS), which is a file storage service that allows the company to store and access file data in a standard file system structure.
Amazon EFS is highly available and scales automatically to support the workload of the application, making it a good choice for storing the data
produced by the application.
upvoted 2 times
1 month, 3 weeks ago
My only question is: since EFS is also highly available and scalable, why not use EFS alone in this case? Is there anything that makes Auto
Scaling a must?
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C. Using EBS as storage is not the right option, as it will not scale automatically.
Using ECS or EKS to run the application is not a requirement here, and it is not clearly mentioned whether the application can be containerized or
not.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: C
Highly available & Autoscales == Multi-AZ Auto Scaling group.
Standard File System == Amazon Elastic File System (Amazon EFS)
upvoted 3 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/84147-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
8 months, 1 week ago
Selected Answer: C
standard file system => EFS rather than S3
upvoted 2 times
8 months, 1 week ago
EBS doesn't offer high availability, data is stored in one AZ.
upvoted 2 times
Topic 1
Question #53
A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be
archived for an additional 9 years. No one at the company, including administrative users and root users, can delete the records during
the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?
A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10
years.
B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to
allow deletion.
C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in
compliance mode for a period of 10 years.
D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use
S3 Object Lock in governance mode for a period of 10 years.
Correct Answer:
C
1 week ago
Selected Answer: C
To prevent deletion of records during the entire 10-year period, you can utilize S3 Object Lock feature. By enabling it in compliance mode, you can
set a retention period on the objects, preventing any user, including administrative and root users, from deleting records.
A: While S3 Glacier is suitable for long-term archival, it may not provide immediate accessibility for the first year as required.
B: Intelligent-Tiering may not offer the most cost-effective archival storage option for the extended 9-year period. Changing the IAM policy after 10
years to allow deletion also introduces manual steps and potential human error.
D: While S3 One Zone-IA can provide cost savings, it doesn't offer the same level of resiliency as S3 Glacier Deep Archive for long-term archival.
upvoted 2 times
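Option C can be sketched as two boto3 parameter shapes: a lifecycle rule transitioning to Deep Archive after 365 days, and a default Object Lock retention in compliance mode. The bucket name and prefix are hypothetical, the calls are commented out, and note that Object Lock must already be enabled when the bucket is created.

```python
lifecycle = {
    "Rules": [{
        "ID": "archive-after-1-year",
        "Status": "Enabled",
        "Filter": {"Prefix": "records/"},  # hypothetical prefix
        "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
    }]
}
object_lock = {
    "ObjectLockEnabled": "Enabled",
    # COMPLIANCE mode: not even the root user can shorten or remove this retention
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
}
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(Bucket="acct-records",
#                                       LifecycleConfiguration=lifecycle)
# s3.put_object_lock_configuration(Bucket="acct-records",
#                                  ObjectLockConfiguration=object_lock)
```

Compliance mode (vs. governance mode) is what satisfies the "not even root can delete" requirement.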
2 months ago
Selected Answer: C
In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 3 times
3 months, 2 weeks ago
Selected Answer: C
Retention Period: a period is specified in days or years.
With Retention Compliance mode, you can't change/adjust the retention mode (not even as the account root user) during the retention period, while
all objects within the bucket are locked.
With Retention Governance mode, a less restrictive mode, you can grant special permission to a group of users to adjust the lock settings by using
s3:BypassGovernanceRetention.
Legal Hold: it's an on/off setting on an object version. There is no retention period. If you enable Legal Hold on a specific object version, you will not be
able to delete or overwrite that specific object version. It needs s3:PutObjectLegalHold as a permission.
upvoted 2 times
4 months ago
Selected Answer: C
S3 Glacier Deep Archive all day....
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance
mode for a period of 10 years.
upvoted 1 times
6 months ago
Selected Answer: C
Use S3 Object Lock in compliance mode
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
C, A lifecycle set to transition from standard to Glacier deep archive and use lock for the delete requirement
A, B and D don't meet the requirements
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance
mode for a period of 10 years.
To meet the requirements, the company could use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after
1 year. S3 Glacier Deep Archive is Amazon's lowest-cost storage class, specifically designed for long-term retention of data that is accessed rarely.
This would allow the company to store the records with maximum resiliency and at the lowest possible cost.
upvoted 2 times
6 months, 1 week ago
To ensure that the records are not deleted during the entire 10-year period, the company could use S3 Object Lock in compliance mode. S3
Object Lock allows the company to apply a retention period to objects in S3, preventing the objects from being deleted until the retention
period expires. By using S3 Object Lock in compliance mode, the company can ensure that the records are not deleted by anyone, including
administrative users and root users, during the entire 10-year period.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
A and B are ruled out: you need the records to be immediately accessible for 1 year, and with an access control policy or IAM policies the
administrator or root still has the ability to delete them.
D is ruled out as it uses One Zone-IA, but the requirement says maximum resiliency.
SO- C should be the right answer.
upvoted 4 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
They should've put Glacier Vault Lock into Option C to make it even more obvious
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
C is the answer that fulfills the requirements of immediate access for one year and data durability for 10 years.
upvoted 2 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
A-Wrong, as the records must be immediately accessible for the first year.
B-The question never mentioned that the records can be deleted or modified after the 10-year period.
D-It does not fulfill the condition of securing resiliency; you need Multi-AZ to guarantee it.
Therefore, the answer is C.
upvoted 2 times
7 months, 4 weeks ago
Selected Answer: C
ans is C
upvoted 1 times
8 months, 1 week ago
Selected Answer: C
sure for C
upvoted 1 times
8 months, 2 weeks ago
CCCCCCCCC
upvoted 1 times
Topic 1
Question #54
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2
instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable
storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for
Windows File Server.
D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to
Amazon EFS.
Correct Answer:
C
Highly Voted
6 months ago
Selected Answer: C
EFS is not supported on Windows instances
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
upvoted 8 times
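For reference, a Multi-AZ FSx for Windows File Server file system is one `create_file_system` call. The sketch below only builds the parameter shape; the subnet IDs and sizes are hypothetical, joining an Active Directory is omitted, and the call itself is commented out because it needs real AWS resources.

```python
fsx_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 2048,  # GiB (hypothetical sizing)
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # one per AZ (hypothetical)
    "WindowsConfiguration": {
        "DeploymentType": "MULTI_AZ_1",  # standby file server in a second AZ
        "ThroughputCapacity": 32,        # MB/s
        "PreferredSubnetId": "subnet-aaaa1111",
    },
}
# boto3.client("fsx").create_file_system(**fsx_params)
# Users keep mapping the share the same way as before, e.g. \\<fs-dns-name>\share
```

`MULTI_AZ_1` is what delivers the high availability the question asks for, while keeping the native SMB access path that EFS cannot provide to Windows clients.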
Highly Voted
7 months, 1 week ago
Selected Answer: C
Windows file shares = Amazon FSx for Windows File Server
Hence, the correct answer is C
upvoted 5 times
6 months, 1 week ago
Taking back this answer. As explained in the latest update.
***CORRECT***
D: Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon
EFS.
upvoted 1 times
Most Recent
1 week ago
Selected Answer: C
Migrating all the data to FSx for Windows File Server allows you to preserve the existing user access method and maintain compatibility with Windows
file shares. Users can continue accessing files using the same method as before, without any disruptions.
A: S3 is a highly durable object storage service, but it is not designed to directly host Windows file shares. Implementing IAM authentication for file
access would require significant changes to the existing user access method.
B: While an S3 File Gateway can provide access to Amazon S3 objects through standard file protocols, it is not the ideal solution for preserving the
existing user access method and maintaining Windows file shares.
D: Although Amazon EFS provides highly available and durable file storage, it does not support the existing Windows file shares and their
access method.
upvoted 2 times
2 months ago
Selected Answer: C
https://aws.amazon.com/fsx/windows/faqs/
Thousands of compute instances and devices can access a file system concurrently.
EFS does not support Windows
upvoted 2 times
2 months, 1 week ago
Selected Answer: C
C is correct. Amazon FSx for Windows File Server.
upvoted 3 times
2 months, 1 week ago
Selected Answer: C
EFS is not supported on Windows instances
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/AmazonEFS.html
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
upvoted 3 times
2 months, 2 weeks ago
Selected Answer: C
C is correct. Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: C
Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: C
https://aws.amazon.com/blogs/aws/amazon-fsx-for-windows-file-server-update-new-enterprise-ready-features/
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
The best option to meet the requirements specified in the question is option D: Extend the file share environment to Amazon Elastic File System
(Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
Amazon EFS is a fully managed, elastic file storage service that scales on demand. It is designed to be highly available, durable, and secure, making
it well-suited for hosting file shares. By using a Multi-AZ configuration, the file share will be automatically replicated across multiple Availability
Zones, providing high availability and durability for the data.
To migrate the data, you can use a variety of tools and techniques, such as Robocopy or AWS DataSync. Once the data has been migrated to EFS,
you can simply update the file share configuration on the existing EC2 instances to point to the EFS file system, and users will be able to access the
files in the same way they currently do.
upvoted 1 times
5 months, 2 weeks ago
EFS is not supported on Windows.
upvoted 4 times
4 months ago
You're 100% right Ello2023. I humbly acknowledged my first answer was WRONG. I am changing my answer. "The correct answer is Option
C". Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for
Windows File Server.
upvoted 5 times
6 months, 1 week ago
Option A, migrating all the data to Amazon S3 and setting up IAM authentication for user access, would not preserve the current file share
access methods and would require users to access the files in a different way.
Option B, setting up an Amazon S3 File Gateway, would not provide the high availability and durability needed for hosting file shares.
Option C, extending the file share environment to FSx for Windows File Server, would provide the desired high availability and durability, but
would also require users to access the files in a different way.
upvoted 3 times
6 months, 1 week ago
EFS is for Linux only, not Windows.
upvoted 1 times
6 months ago
You're right Ronald Chow. Thanks! Option D is incorrect because Amazon Elastic File System (EFS) is a file storage service that is not natively
compatible with the Windows operating system, and would not preserve the existing access methods for users.
I am taking back my answer. "The correct answer is Option C". Extend the file share environment to Amazon FSx for Windows File Server with
a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
upvoted 6 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
D
Amazon EFS is fully compatible with the SMB protocol that is used by Windows file shares, which means that users can continue to access the files
in the same way they currently do. Extending the file share environment to FSx for Windows File Server with a Multi-AZ configuration would not be
a suitable solution, as FSx for Windows File Server is not as scalable or cost-effective as Amazon EFS.
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: C
EFS is only for Linux.
upvoted 3 times
7 months, 1 week ago
Selected Answer: C
EFS is only for Linux.
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
FSX---> SMB
upvoted 2 times
8 months ago
Selected Answer: C
C is correct.
upvoted 3 times
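As the thread concludes, option C (Amazon FSx for Windows File Server with a Multi-AZ deployment) is the SMB-compatible choice. A minimal sketch of the request such a setup would send is below; the subnet IDs and Active Directory ID are hypothetical placeholders, and the request is only constructed here, not sent to AWS.

```python
# Sketch of a create_file_system request for a Multi-AZ FSx for Windows
# file system. All identifiers are hypothetical placeholders.
fsx_request = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 1024,                              # GiB
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # one subnet per AZ
    "WindowsConfiguration": {
        "DeploymentType": "MULTI_AZ_1",          # standby file server in a second AZ
        "ThroughputCapacity": 32,                # MB/s
        "PreferredSubnetId": "subnet-aaaa1111",  # AZ of the primary file server
        "ActiveDirectoryId": "d-1234567890",     # FSx for Windows joins an AD
    },
}
# With boto3 this dict would be passed as:
#   boto3.client("fsx").create_file_system(**fsx_request)
```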
Topic 1
Question #55
A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2
instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a
public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the
RDS databases.
Which solution will meet these requirements?
A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
B. Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the
security group to the DB instances.
C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the
security group to the DB instances.
D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the
private subnets and the database subnets.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
A: doesn't fully configure the traffic flow
B: security groups don't have deny rules
D: peering is mostly between VPCs, doesn't really help here
answer is C, most mainstream way
upvoted 29 times
Highly Voted
3 months, 4 weeks ago
Just took the exam today and EVERY ONE of the questions came from this dump. Memorize it all. Good luck.
upvoted 14 times
Most Recent
1 week ago
Selected Answer: C
Creating a security group that allows inbound traffic from the security group assigned to instances in the private subnets ensures that only EC2
instances running in the private subnets can access the RDS databases. By associating the security group with the DB instances, you restrict
access to only the instances that belong to the designated security group.
A: While this approach may help control routing within the VPC, it does not address the specific access requirement between EC2 instances and
RDS databases.
B: Using a deny rule in a security group can lead to complexities and potential misconfigurations. It is generally recommended to use allow rules to
explicitly define access permissions.
D: Peering connections enable communication between different VPCs or VPCs in different regions, and they are not necessary for restricting
access between subnets within the same VPC.
upvoted 2 times
3 weeks, 2 days ago
Selected Answer: C
Option C meets the requirements.
upvoted 1 times
1 month, 1 week ago
By default, a security group is set up with rules that deny all inbound traffic and permit all outbound traffic.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: C
CCCCCCCCCCC
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security
group to the DB instances. This will allow the EC2 instances in the private subnets to have access to the RDS databases while denying access to the
EC2 instances in the public subnets.
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
The solution that meets the requirements described in the question is option C: Create a security group that allows inbound traffic from the
security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
In this solution, the security group applied to the DB instances allows inbound traffic from the security group assigned to instances in the private
subnets. This ensures that only EC2 instances running in the private subnets can have access to the RDS databases.
upvoted 3 times
6 months, 1 week ago
Option A, creating a new route table that excludes the route to the public subnets' CIDR blocks and associating it with the database subnets,
would not meet the requirements because it would block all traffic to the database subnets, not just traffic from the public subnets.
Option B, creating a security group that denies inbound traffic from the security group assigned to instances in the public subnets and attaching
it to the DB instances, would not meet the requirements because it would allow all traffic from the private subnets to reach the DB instances,
not just traffic from the security group assigned to instances in the private subnets.
Option D, creating a new peering connection between the public subnets and the private subnets and a different peering connection between
the private subnets and the database subnets, would not meet the requirements because it would allow all traffic from the private subnets to
reach the DB instances, not just traffic from the security group assigned to instances in the private subnets.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
The real trick is between B and C. A and D are ruled out for obvious reasons.
B is wrong as you cannot have deny type rules in Security groups.
So- C is the right answer.
upvoted 4 times
7 months ago
Selected Answer: C
The key is "Only EC2 instances that run in the private subnets can have access to the RDS databases"
The answer is C.
upvoted 2 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
Ans correct.
upvoted 2 times
8 months, 2 weeks ago
Selected Answer: C
Inside a VPC, traffic between different subnets cannot be restricted by routing, but if they were in different VPCs it would be possible. This is an
important point about VPCs.
- So the only method is security groups: like EC2, RDS also has security groups to restrict traffic to database instances
upvoted 6 times
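The security-group-to-security-group allow rule that option C describes can be sketched as boto3 request parameters. The group IDs below are hypothetical placeholders, and the request is only built here, not sent.

```python
# Ingress rule for the DB security group: allow MySQL/Aurora (TCP 3306)
# only from the security group attached to the private-subnet EC2 instances.
ingress_params = {
    "GroupId": "sg-0db0db0db0db0db0d",  # attached to the RDS DB instances
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Referencing a source security group instead of a CIDR block means
        # group membership, not an IP range, controls access.
        "UserIdGroupPairs": [{"GroupId": "sg-0priv0priv0priv0"}],
    }],
}
# boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```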
Topic 1
Question #56
A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public
interface for its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL
with the company's domain name and corresponding certificate so that the third-party services can use HTTPS.
Which solution will meet these requirements?
A. Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import
the public certificate associated with the company's domain name into AWS Certificate Manager (ACM).
B. Create Route 53 DNS records with the company's domain name. Point the alias record to the Regional API Gateway stage endpoint. Import
the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.
C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public
certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the
API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.
D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public
certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to
the API Gateway APIs. Create Route 53 DNS records with the company's domain name. Point an A record to the company's domain name.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
I think the answer is C. we don't need to attach a certificate in us-east-1, if is not for cloudfront. In our case the target is ca-central-1.
upvoted 23 times
8 months, 2 weeks ago
I think that is C too, the target would be the same Region.
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-regional-api-custom-domain-create.html
upvoted 7 times
Highly Voted
6 months, 1 week ago
Selected Answer: C
The correct solution to meet these requirements is option C.
To design the API Gateway URL with the company's domain name and corresponding certificate, the company needs to do the following:
1. Create a Regional API Gateway endpoint: This will allow the company to create an endpoint that is specific to a region.
2. Associate the API Gateway endpoint with the company's domain name: This will allow the company to use its own domain name for the API
Gateway URL.
3. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region: This will allow
the company to use HTTPS for secure communication with its APIs.
4. Attach the certificate to the API Gateway endpoint: This will allow the company to use the certificate for securing the API Gateway URL.
5. Configure Route 53 to route traffic to the API Gateway endpoint: This will allow the company to use Route 53 to route traffic to the API Gateway
URL using the company's domain name.
upvoted 18 times
2 days, 1 hour ago
Google Bard reply.
upvoted 1 times
6 months, 1 week ago
Option C includes all the necessary steps to meet the requirements, hence it is the correct solution.
Options A and D do not include the necessary steps to associate the API Gateway endpoint with the company's domain name and attach the
certificate to the endpoint.
Option B includes the necessary steps to associate the API Gateway endpoint with the company's domain name and attach the certificate, but it
imports the certificate into the us-east-1 Region instead of the ca-central-1 Region where the API Gateway is located.
upvoted 5 times
Community vote distribution
C (97%)
Most Recent
1 week ago
Selected Answer: C
Option C encompasses all the necessary steps to design the API Gateway URL with the company's domain name and enable secure HTTPS access
using the appropriate certificate.
A. This approach does not involve using the company's domain name or a custom certificate. It does not provide a solution for enabling HTTPS
access with a corresponding certificate.
B. It suggests importing the certificate into ACM in the us-east-1 Region, which may not align with the desired ca-central-1 Region for this scenario.
It's important to use ACM in the same Region where API Gateway is deployed to simplify certificate management.
D. It suggests importing the certificate into ACM in the us-east-1 Region, which again does not align with the desired ca-central-1 Region.
Additionally, it mentions attaching the certificate to API Gateway, which is not necessary for achieving the desired outcome of enabling HTTPS
access for the API Gateway endpoint.
upvoted 2 times
3 weeks, 2 days ago
Selected Answer: C
I switch to option C too, which meets the requirements.
upvoted 1 times
3 weeks, 2 days ago
Selected Answer: D
I vote for option D.
upvoted 1 times
1 month, 1 week ago
https://www.youtube.com/watch?v=Ro0rgeLDkO4
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
C: It should be in the same Region
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: C
In this scenario, the goal is to design the API Gateway URL with the company's domain name and corresponding certificate so that third-party
services can use HTTPS. To accomplish this, a solutions architect should create a Regional API Gateway endpoint and associate it with the
company's domain name. The public certificate associated with the company's domain name should be imported into AWS Certificate Manager
(ACM) in the same Region as the API Gateway endpoint. The certificate should then be attached to the API Gateway endpoint to enable HTTPS.
Finally, Route 53 should be configured to route traffic to the API Gateway endpoint.
upvoted 2 times
3 months, 2 weeks ago
ACM is always in US east 1
upvoted 2 times
3 months, 3 weeks ago
In the solution I provided, the region used for AWS Certificate Manager (ACM) is us-east-1, which is different from the ca-central-1 region used for
Amazon API Gateway in the question. This is because ACM certificates can only be issued in the us-east-1 region, which is a global endpoint for
ACM.
When creating a custom domain name in Amazon API Gateway and attaching an ACM certificate to it, the region of the certificate does not have to
match the region of the API Gateway deployment. However, it's worth noting that there may be additional latency or costs associated with using a
certificate from a different region.
In summary, the solution I provided is still valid and meets the requirements of the question, even though it uses a different region for ACM...pum!
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
It's C: You can use an ACM certificate in API Gateway.
https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html
Certificates are regional and have to be uploaded in the same AWS Region as the service you're using it for. (If you're using a certificate with
CloudFront, you have to upload it into US East (N. Virginia).)
https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
upvoted 3 times
6 months, 1 week ago
Certificates in ACM are regional resources. To use a certificate with Elastic Load Balancing for the same fully qualified domain name (FQDN) or set
of FQDNs in more than one AWS region, you must request or import a certificate for each region. For certificates provided by ACM, this means you
must revalidate each domain name in the certificate for each region. You cannot copy a certificate between regions
upvoted 1 times
6 months, 1 week ago
C correct ans
Edge-Optimized (default): For global clients
• Requests are routed through the CloudFront Edge locations
(improves latency)
• The API Gateway still lives in only one region
• The TLS Certificate must be in the same region as
CloudFront, in us-east-1
• Then setup CNAME or (better) A-Alias record in Route 53
upvoted 1 times
6 months, 1 week ago
C is the answer. As per the first line in question Route 53 already has registered DNS name for the company so there is no additional steps needed
in Route 53.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
Can't be D, as an A record can only point to an IP address, not a domain name
upvoted 2 times
7 months ago
Selected Answer: C
Cert should be in the same region.
Answer: C
upvoted 1 times
7 months ago
Selected Answer: D
I choose D since the company wants its own domain name - should not be a regional one. Even though the answer does not mention edge-
optimized custom domain name, this setup has to use it.
upvoted 1 times
6 months, 3 weeks ago
You misunderstand the term regional. This has no impact on the domain name, but instead refers to Regional and Edge-Optimized are
deployment options, see https://stackoverflow.com/questions/49826230/regional-edge-optimized-api-gateway-vs-regional-edge-optimized-
custom-domain-nam
upvoted 3 times
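The steps in option C can be sketched as the two requests below: a regional custom domain backed by an ACM certificate in the same Region, plus a Route 53 alias record pointing at the regional endpoint. The domain name, ARN, hosted zone ID, and endpoint DNS name are all hypothetical placeholders; the requests are only constructed here, not sent.

```python
# Option C sketch: regional custom domain for API Gateway in ca-central-1,
# with the certificate imported into ACM in that same Region.
REGION = "ca-central-1"

domain_params = {
    "domainName": "api.example.com",
    "regionalCertificateArn": f"arn:aws:acm:{REGION}:111122223333:certificate/abc123",
    "endpointConfiguration": {"types": ["REGIONAL"]},
}
# boto3.client("apigateway", region_name=REGION).create_domain_name(**domain_params)

# Route 53 alias record routing the custom domain to the regional endpoint.
record_change = {
    "Action": "UPSERT",
    "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "AliasTarget": {
            "HostedZoneId": "ZEXAMPLE12345",  # API Gateway's regional zone ID
            "DNSName": f"d-abc123.execute-api.{REGION}.amazonaws.com",
            "EvaluateTargetHealth": False,
        },
    },
}
```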
Topic 1
Question #57
A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The
company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development
effort.
What should a solutions architect do to meet these requirements?
A. Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.
B. Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.
C. Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions.
D. Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence
predictions.
predictions.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
Good Answer is B :
https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html?pg=ln&sec=ft
upvoted 13 times
Highly Voted
6 months, 1 week ago
Selected Answer: B
The best solution to meet these requirements would be option B: Use Amazon Rekognition to detect inappropriate content, and use human review
for low-confidence predictions.
Amazon Rekognition is a cloud-based image and video analysis service that can detect inappropriate content in images using its pre-trained label
detection model. It can identify a wide range of inappropriate content, including explicit or suggestive adult content, violent content, and offensive
language. The service provides high accuracy and low latency, making it a good choice for this use case.
upvoted 7 times
6 months, 1 week ago
Option A, using Amazon Comprehend, is not a good fit for this use case because Amazon Comprehend is a natural language processing service
that is designed to analyze text, not images.
Option C, using Amazon SageMaker to detect inappropriate content, would require significant development effort to build and train a custom
machine learning model. It would also require a large dataset of labeled images to train the model, which may be time-consuming and
expensive to obtain.
Option D, using AWS Fargate to deploy a custom machine learning model, would also require significant development effort and a large dataset
of labeled images. It may not be the most efficient or cost-effective solution for this use case.
In summary, the best solution is to use Amazon Rekognition to detect inappropriate content in images, and use human review for low-
confidence predictions to ensure that all inappropriate content is detected.
upvoted 6 times
Most Recent
1 week ago
Using Amazon Rekognition for content moderation is a cost-effective and efficient solution that reduces the need for developing and training
custom machine learning models, making it the best option in terms of minimizing development effort.
A. Amazon Comprehend is a natural language processing service provided by AWS, primarily focused on text analysis rather than image analysis.
C. Amazon SageMaker is a comprehensive machine learning service that allows you to build, train, and deploy custom machine learning models. It
requires significant development effort to build and train a custom model. In addition, utilizing ground truth to label low-confidence predictions
would further add to the development complexity and maintenance overhead.
D. Similar to C, using AWS Fargate to deploy a custom machine learning model requires significant development effort.
upvoted 2 times
3 months, 2 weeks ago
Selected Answer: B
Amazon Rekognition is a cloud-based image and video analysis service that can detect inappropriate content in images using its pre-trained label
detection model. It can identify a wide range of inappropriate content, including explicit or suggestive adult content, violent content, and offensive
language.
upvoted 1 times
Community vote distribution
B (100%)
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
B
AWS Rekognition to detect inappropriate content and use human review for low-confidence predictions. This option minimizes development effort
because Amazon Rekognition is a pre-built machine learning service that can detect inappropriate content. Using human review for low-confidence
predictions allows for more accurate detection of inappropriate content.
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
8 months, 1 week ago
Selected Answer: B
Option B.
https://docs.aws.amazon.com/rekognition/latest/dg/a2i-rekognition.html
upvoted 1 times
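The "human review for low-confidence predictions" step in option B can be sketched with a small routing helper. The labels below are fabricated sample data, and the actual Rekognition call (detect_moderation_labels) is only shown in a comment.

```python
# Route Rekognition moderation results: high-confidence hits are blocked
# automatically, low-confidence ones are queued for a human reviewer.
# In a real flow the labels would come from:
#   boto3.client("rekognition").detect_moderation_labels(
#       Image={"S3Object": {"Bucket": bucket, "Name": key}}, MinConfidence=50)

def route_moderation(labels, auto_block_threshold=90.0):
    """Split moderation labels into auto-block and human-review lists."""
    auto_block, human_review = [], []
    for label in labels:
        if label["Confidence"] >= auto_block_threshold:
            auto_block.append(label["Name"])
        else:
            human_review.append(label["Name"])
    return auto_block, human_review

# Fabricated sample labels for illustration:
sample = [
    {"Name": "Explicit Nudity", "Confidence": 98.7},
    {"Name": "Violence", "Confidence": 62.1},
]
blocked, review = route_moderation(sample)
# blocked -> ["Explicit Nudity"], review -> ["Violence"]
```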
Topic 1
Question #58
A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus
on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying
infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?
A. Use Amazon EC2 instances, and install Docker on the instances.
B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
Good answer is C:
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without having to manage servers. AWS
Fargate is compatible with Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
https://aws.amazon.com/fr/fargate/
upvoted 17 times
Most Recent
1 week ago
Selected Answer: C
Using ECS on Fargate allows you to run containers without the need to manage the underlying infrastructure. Fargate abstracts away the
underlying EC2 and provides serverless compute for containers.
A. This option would require manual provisioning and management of EC2, as well as installing and configuring Docker on those instances. It
would introduce additional overhead and responsibilities for maintaining the underlying infrastructure.
B. While this option leverages ECS to manage containers, it still requires provisioning and managing EC2 to serve as worker nodes. It adds
complexity and maintenance overhead compared to the serverless nature of Fargate.
D. This option still involves managing and provisioning EC2, even though an ECS-optimized AMI simplifies the process of setting up EC2 for
running ECS. It does not provide the level of serverless abstraction and ease of management offered by Fargate.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: C
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2
instances.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
ECS + Fargate
upvoted 3 times
5 months, 4 weeks ago
Selected Answer: C
AWS Fargate will hide all the complexity for you
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
AWS Fargate is a fully managed container execution environment that runs containers without the need to provision and manage underlying
infrastructure. This makes it a good choice for companies that want to focus on maintaining their critical applications and do not want to be
responsible for provisioning and managing the underlying infrastructure.
Option A involves installing Docker on Amazon EC2 instances, which would still require the company to manage the underlying infrastructure.
Option B involves using Amazon ECS on Amazon EC2 worker nodes, which would also require the company to manage the underlying
infrastructure. Option D involves using Amazon EC2 instances from an Amazon ECS-optimized Amazon Machine Image (AMI), which would also
require the company to manage the underlying infrastructure.
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
Obviously anything with EC2 in the answer is wrong...
upvoted 1 times
7 months ago
Selected Answer: C
The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
Fargate is serverless and no need to manage.
Answer: C
upvoted 2 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
Agree Serverless Containerization Think Fargate
upvoted 2 times
8 months, 1 week ago
Selected Answer: C
Option C. Fargate is serverless, no need to manage the underlying infrastructure.
upvoted 4 times
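What makes option C "no infrastructure to manage" is the Fargate launch type: the task definition declares FARGATE compatibility and awsvpc networking, and no EC2 worker nodes exist. A sketch is below; the family name and image URI are hypothetical placeholders, and the request is only constructed, not registered.

```python
# Sketch of a Fargate task definition. requiresCompatibilities=["FARGATE"]
# plus awsvpc networking is what removes EC2 worker nodes from the picture.
task_def = {
    "family": "critical-app",
    "requiresCompatibilities": ["FARGATE"],  # no EC2 instances to manage
    "networkMode": "awsvpc",                 # required for Fargate tasks
    "cpu": "512",                            # 0.5 vCPU
    "memory": "1024",                        # 1 GiB
    "containerDefinitions": [{
        "name": "app",
        "image": "111122223333.dkr.ecr.ca-central-1.amazonaws.com/critical-app:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
}
# boto3.client("ecs").register_task_definition(**task_def)
```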
Topic 1
Question #59
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream
data each day.
What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate
analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to
use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS
Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake.
Load the data in Amazon Redshift for analysis.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
Option D.
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 15 times
6 months, 3 weeks ago
Unsure if this is right URL for this scenario. Option D is referring to S3 and then Redshift. Whereas URL discuss about eliminating S3 :- We’re
excited to launch Amazon Redshift streaming ingestion for Amazon Kinesis Data Streams, which enables you to ingest data directly from the
Kinesis data stream without having to stage the data in Amazon Simple Storage Service (Amazon S3). Streaming ingestion allows you to achieve
low latency in the order of seconds while ingesting hundreds of megabytes of data into your Amazon Redshift cluster.
upvoted 1 times
Highly Voted
6 months, 1 week ago
Selected Answer: D
Option D is the most appropriate solution for transmitting and processing the clickstream data in this scenario.
Amazon Kinesis Data Streams is a highly scalable and durable service that enables real-time processing of streaming data at a high volume and
high rate. You can use Kinesis Data Streams to collect and process the clickstream data in real-time.
Amazon Kinesis Data Firehose is a fully managed service that loads streaming data into data stores and analytics tools. You can use Kinesis Data
Firehose to transmit the data from Kinesis Data Streams to an Amazon S3 data lake.
Once the data is in the data lake, you can use Amazon Redshift to load the data and perform analysis on it. Amazon Redshift is a fully managed,
petabyte-scale data warehouse service that allows you to quickly and efficiently analyze data using SQL and your existing business intelligence
tools.
upvoted 9 times
6 months, 1 week ago
Option A, which involves using AWS Data Pipeline to archive the data to an Amazon S3 bucket and running an Amazon EMR cluster with the
data to generate analytics, is not the most appropriate solution because it does not involve real-time processing of the data.
Option B, which involves creating an Auto Scaling group of Amazon EC2 instances to process the data and sending it to an Amazon S3 data lake
for Amazon Redshift to use for analysis, is not the most appropriate solution because it does not involve a fully managed service for
transmitting the data from the processing layer to the data lake.
Option C, which involves caching the data to Amazon CloudFront, storing the data in an Amazon S3 bucket, and running an AWS Lambda
function to process the data for analysis when an object is added to the S3 bucket, is not the most appropriate solution because it does not
involve a scalable and durable service for collecting and processing the data in real-time.
upvoted 2 times
Most Recent
1 week ago
Selected Answer: D
A. This option utilizes S3 for data storage and EMR for analytics, but Data Pipeline is not an ideal service for real-time streaming data ingestion
and processing. It is better suited for batch processing scenarios.
B. This option involves managing and scaling EC2 instances, which adds operational overhead. It is also not a real-time streaming solution.
Additionally, use of Redshift for analyzing clickstream data might not be the most efficient or cost-effective approach.
C. CloudFront is a CDN service and is not designed for real-time data processing or analytics. While using Lambda to process data can be an
option, it may not be the most efficient solution for processing large volumes of clickstream data.
Therefore, collecting the data from Kinesis Data Streams, using Kinesis Data Firehose to transmit it to an S3 data lake, and loading it into Redshift
for analysis is the recommended approach. This combination provides a scalable, real-time streaming solution with storage and analytics
capabilities that can handle a high volume of clickstream data.
upvoted 2 times
1 month, 4 weeks ago
Clickstream is the key - Answer is D
upvoted 1 times
3 months ago
Selected Answer: A
I am going to be unpopular here and I'll go for A). Even if here are other services that offer a better experience, data Pipeline can do the job here.
"you can use AWS Data Pipeline to archive your web server's logs to Amazon Simple Storage Service (Amazon S3) each day and then run a weekly
Amazon EMR (Amazon EMR) cluster over those logs to generate traffic reports"
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html In the question there is no specific timing
requirement for analytics. Also, the EMR cluster job can be scheduled to be executed daily.
Option D) is a valid answer too, however with Amazon Redshift Streaming Ingestion "you can connect to Amazon Kinesis Data Streams data
streams and pull data directly to Amazon Redshift without staging data in S3" https://aws.amazon.com/redshift/redshift-streaming-ingestion. So in
this scenario Kinesis Data Firehose and S3 are redundant.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
It is C.
The image in here https://aws.amazon.com/kinesis/data-firehose/ shows how kinesis can send data collected to firehose who can send it to
Redshift.
It is also possible to use an intermediary S3 bucket between firehose and redshift. See image in here
https://aws.amazon.com/blogs/big-data/stream-transform-and-analyze-xml-data-in-real-time-with-amazon-kinesis-aws-lambda-and-amazon-
redshift/
upvoted 1 times
6 months, 4 weeks ago
Why not A?
You can collect data with AWS Data Pipeline and then analyze it with EMR. Whats wrong with this option?
upvoted 4 times
6 months, 2 weeks ago
It's not A, the wording is tricky! It says "to archive the data to S3" - there is no mention of archiving in the question, so it has to be D :)
upvoted 2 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 3 weeks ago
Clickstream & analyze/process - think KDS (Kinesis Data Streams).
upvoted 2 times
8 months, 1 week ago
Selected Answer: D
D seems to make sense
upvoted 4 times
8 months, 1 week ago
Option D is correct... See the resource. Thank you Ariel
upvoted 1 times
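The thread above leans on clickstream events going into Kinesis Data Streams. A minimal producer-side sketch, assuming boto3 is available; the stream name `clickstream` and the event fields are illustrative, not from the question:

```python
import json

def build_kinesis_record(event, stream_name="clickstream"):
    """Build the kwargs for a boto3 kinesis put_record call.

    Partitioning by user_id keeps one user's clicks ordered within a
    shard. Stream name and event fields are illustrative assumptions.
    """
    return {
        "StreamName": stream_name,
        "Data": json.dumps(event).encode("utf-8"),  # KDS payloads are bytes
        "PartitionKey": str(event["user_id"]),
    }

# Sending it would look like this (requires AWS credentials):
#
# import boto3
# kinesis = boto3.client("kinesis")
# kinesis.put_record(**build_kinesis_record({"user_id": 42, "page": "/home"}))
```

From there, a consumer (for example Kinesis Data Firehose, as option D in the thread proposes) can deliver the stream onward to S3 or Redshift.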
Topic 1
Question #60
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and
HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
A. Update the ALB's network ACL to accept only HTTPS traffic.
B. Create a rule that replaces the HTTP in the URL with HTTPS.
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
Answer C :
https://docs.aws.amazon.com/fr_fr/elasticloadbalancing/latest/application/create-https-listener.html
https://aws.amazon.com/fr/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/
upvoted 12 times
Highly Voted
6 months, 1 week ago
Selected Answer: C
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
To meet the requirement of forwarding all requests to the website so that the requests will use HTTPS, a solutions architect can create a listener
rule on the ALB that redirects HTTP traffic to HTTPS. This can be done by creating a rule with a condition that matches all HTTP traffic and a rule
action that redirects the traffic to the HTTPS listener. The HTTPS listener should already be configured to accept HTTPS traffic and forward it to the
target group.
upvoted 10 times
6 months, 1 week ago
Option A. Updating the ALB's network ACL to accept only HTTPS traffic is not a valid solution because the network ACL is used to control
inbound and outbound traffic at the subnet level, not at the listener level.
Option B. Creating a rule that replaces the HTTP in the URL with HTTPS is not a valid solution because this would not redirect the traffic to the
HTTPS listener.
Option D. Replacing the ALB with a Network Load Balancer configured to use Server Name Indication (SNI) is not a valid solution because it
would not address the requirement to redirect HTTP traffic to HTTPS.
upvoted 8 times
Most Recent
1 week ago
Selected Answer: C
A. Network ACLs operate at subnet level and control inbound and outbound traffic. Updating the network ACL alone will not enforce the
redirection of HTTP to HTTPS.
B. This approach would require modifying application code or server configuration to perform URL rewrite. It is not an optimal solution as it adds
complexity and potential maintenance overhead. Moreover, it does not leverage the ALB's capabilities for handling HTTP-to-HTTPS redirection.
D. While NLB can handle SSL/TLS termination using SNI for routing requests to different services, replacing the ALB solely to enforce HTTP-to-
HTTPS redirection would be an unnecessary and more complex solution.
Therefore, the recommended approach is to create a listener rule on the ALB to redirect HTTP traffic to HTTPS. By configuring a listener rule, you
can define a redirect action that automatically directs HTTP requests to their corresponding HTTPS versions.
upvoted 3 times
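The redirect rule described above can be expressed programmatically. A sketch of the redirect action shape the boto3 `elbv2` client accepts, assuming boto3 and a placeholder load balancer ARN:

```python
def https_redirect_action(status_code="HTTP_301"):
    """Default action for an HTTP:80 listener that redirects to HTTPS:443.

    This is the action structure accepted by the elbv2 client's
    create_listener / modify_listener calls.
    """
    return {
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            # Keep the original host, path, and query string.
            "Host": "#{host}",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": status_code,  # permanent redirect
        },
    }

# Attaching it to an ALB would look like this (requires AWS credentials;
# alb_arn is a placeholder):
#
# import boto3
# elbv2 = boto3.client("elbv2")
# elbv2.create_listener(
#     LoadBalancerArn=alb_arn,
#     Protocol="HTTP",
#     Port=80,
#     DefaultActions=[https_redirect_action()],
# )
```

Using `HTTP_301` tells browsers the redirect is permanent, so subsequent visits go straight to HTTPS.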
1 month, 1 week ago
A solutions architect should create listener rules to redirect HTTP traffic to HTTPS.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: C
C is correct. Traffic redirection will solve it.
upvoted 2 times
Community vote distribution
C (100%)
3 months ago
Selected Answer: C
This rule can be created in the following way:
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Load Balancers.
3. Select the ALB and choose Listeners.
4. Choose View/edit rules and then choose Add rule.
5. In the Add Rule dialog box, add a condition that matches the HTTP traffic.
6. For the rule action, choose Redirect to HTTPS on port 443.
7. Choose Save rules.
This listener rule will redirect all HTTP requests to HTTPS, ensuring that all traffic is encrypted.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: C
Configure an HTTPS listener on the ALB: This step involves setting up an HTTPS listener on the ALB and configuring the security policy to use a
secure SSL/TLS protocol and cipher suite.
Create a redirect rule on the ALB: The redirect rule should be configured to redirect all incoming HTTP requests to HTTPS. This can be done by
creating a redirect rule that redirects HTTP requests on port 80 to HTTPS requests on port 443.
Update the DNS record: The DNS record for the website should be updated to point to the ALB's DNS name, so that all traffic is routed through the
ALB.
Verify the configuration: Once the configuration is complete, the website should be tested to ensure that all requests are being redirected to
HTTPS. This can be done by accessing the website using HTTP and verifying that the request is redirected to HTTPS.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
C
To redirect HTTP traffic to HTTPS, a solutions architect should create a listener rule on the ALB to redirect HTTP traffic to HTTPS. Option A is not
correct because network ACLs do not have the ability to redirect traffic. Option B is not correct because it does not redirect traffic; it only rewrites
the URL. Option D is not correct because replacing the ALB with a Network Load Balancer does not provide HTTP-to-HTTPS redirection.
upvoted 2 times
7 months, 1 week ago
C is correct
upvoted 1 times
8 months, 2 weeks ago
Selected Answer: C
Answer C: https://aws.amazon.com/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/
upvoted 4 times
Topic 1
Question #61
A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2
instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The
company must also implement a solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled
AWS Lambda function that updates the RDS credentials and instance metadata at the same time.
B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch
Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the
same time. Use S3 Versioning to ensure the ability to fall back to previous values.
C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required
permission to the EC2 role to grant access to the secret.
D. Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the
encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
Secrets Manager supports automatic rotation, unlike Parameter Store.
upvoted 15 times
8 months, 1 week ago
Parameter store does not support autorotation.
upvoted 7 times
Highly Voted
6 months, 1 week ago
Selected Answer: C
The correct solution is C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the
required permission to the EC2 role to grant access to the secret.
AWS Secrets Manager is a service that enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets
throughout their lifecycle. By storing the database credentials as a secret in Secrets Manager, you can ensure that they are not hardcoded in the
application and that they are automatically rotated on a regular basis. To grant the EC2 instance access to the secret, you can attach the required
permission to the EC2 role. This will allow the application to retrieve the secret from Secrets Manager as needed.
upvoted 8 times
6 months, 1 week ago
Option A, storing the database credentials in the instance metadata and using a Lambda function to update them, would not meet the
requirement of not hardcoding the credentials in the application.
Option B, storing the database credentials in an encrypted S3 bucket and using a Lambda function to update them, would also not meet this
requirement, as the application would still need to access the credentials from the configuration file.
Option D, storing the database credentials as encrypted parameters in AWS Systems Manager Parameter Store, would also not meet this
requirement, as the application would still need to access the encrypted parameters in order to use them.
upvoted 5 times
Most Recent
1 week ago
Selected Answer: C
Storing the credentials in Secrets Manager provides dedicated and secure management. With automatic rotation enabled, Secrets Manager handles
the credential updates automatically. Attaching the necessary permissions to the EC2 role allows the application to securely access the secret.
This approach minimizes operational overhead and provides a secure and managed solution for credential management.
upvoted 2 times
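As described above, the application retrieves the secret at runtime instead of hardcoding it. A sketch of the consuming side, assuming the secret is the JSON document Secrets Manager creates for RDS credentials (keys like `username`, `password`, `host`, `port`); the secret name in the commented call is a placeholder:

```python
import json

def parse_db_secret(secret_string):
    """Extract connection fields from a Secrets Manager RDS secret.

    RDS secrets managed by Secrets Manager are JSON documents with keys
    such as username, password, host, port, and dbname.
    """
    secret = json.loads(secret_string)
    return {
        "user": secret["username"],
        "password": secret["password"],
        "host": secret.get("host", "localhost"),
        "port": int(secret.get("port", 3306)),
    }

# Fetching the raw string needs AWS credentials and the EC2 role
# permission secretsmanager:GetSecretValue; "prod/app/db" is a
# placeholder secret name:
#
# import boto3
# sm = boto3.client("secretsmanager")
# raw = sm.get_secret_value(SecretId="prod/app/db")["SecretString"]
# creds = parse_db_secret(raw)
```

Because the application re-reads the secret on each connection (or on connection failure), rotation happens without any code change.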
3 weeks, 2 days ago
Selected Answer: C
The solution that meets the requirements with the least operational overhead is option C.
upvoted 1 times
Community vote distribution
C (100%)
1 month, 1 week ago
Selected Answer: C
My choice is C.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: C
The right option is C.
upvoted 1 times
4 months, 3 weeks ago
C is the most correct answer. Automatic rotation is handled by Secrets Manager.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C - as the requirement is to rotate the secrets, Secrets Manager is the one that can support it.
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 2 times
8 months, 1 week ago
Selected Answer: C
AWS Secrets Manager is a newer service than SSM Parameter store
upvoted 3 times
8 months, 1 week ago
Selected Answer: C
Option C.
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
upvoted 2 times
Topic 1
Question #62
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application
needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be
rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?
A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to
automatically rotate the certificate.
B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to
the ALB. Use the managed renewal feature to automatically rotate the certificate.
C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to
the ALB. Use the managed renewal feature to automatically rotate the certificate.
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon
CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
It's a third-party certificate, hence AWS cannot manage renewal automatically. The closest thing you can do is to send a notification to renew the
3rd party certificate.
upvoted 28 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: D
It is D, because ACM does not manage the renewal process for imported certificates. You are responsible for monitoring the expiration date of your
imported certificates and for renewing them before they expire.
Check this question on the link below:
Q: What types of certificates can I create and manage with ACM?
https://www.amazonaws.cn/en/certificate-manager/faqs/#Managed_renewal_and_deployment
upvoted 16 times
Most Recent
1 week ago
Selected Answer: D
D: With this approach, you import the third-party certificate into ACM, which allows you to centrally manage and apply it to the ALB. By configuring
CloudWatch Events, you can receive notifications when the certificate is close to expiring, prompting you to manually initiate the rotation process.
A & B: These options assume that the SSL/TLS certificate can be issued directly by ACM. However, since the requirement specifies that the
certificate should be issued by an external certificate authority (CA), this option is not suitable.
C: ACM Private Certificate Authority is used when you want to create your own private CA and issue certificates from it. It does not support
certificates issued by external CAs. Therefore, this option is not suitable for the given requirement.
upvoted 2 times
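The notification in option D boils down to comparing the certificate's expiry date against a threshold. A minimal sketch of that check, assuming an EventBridge-triggered function already has the certificate's `NotAfter` timestamp; the 45-day threshold is an arbitrary example, not an AWS default:

```python
from datetime import datetime, timedelta, timezone

def should_alert(not_after, now=None, threshold_days=45):
    """True when a certificate is within threshold_days of expiring.

    Mirrors the check a notifier would make against the imported
    certificate's NotAfter date before sending a renewal reminder.
    """
    now = now or datetime.now(timezone.utc)
    return not_after - now <= timedelta(days=threshold_days)
```

In practice the expiry date would come from ACM's certificate metadata, and the alert itself would go out via a topic or email target configured on the EventBridge rule.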
1 week, 5 days ago
D is correct, since it's an external certificate
upvoted 1 times
3 weeks, 2 days ago
Selected Answer: D
Option D meets these requirements.
upvoted 1 times
1 month, 1 week ago
Since it is an external certificate, you can't automate it. The only thing you can do is get a notification and renew it manually; there is no other way around it.
upvoted 1 times
1 month, 1 week ago
In the question it mentions that it's a third-party certificate. AWS has not got much control of third-party certificates and cannot manage renewal
automatically. The closest thing you can do is to send a notification to renew the 3rd party certificate.
upvoted 1 times
Community vote distribution
D (95%)
5%
1 month, 4 weeks ago
EXTERNAL certificate is the key - manual rotation is required, so the answer is D.
upvoted 3 times
2 months, 1 week ago
Selected Answer: D
A B and C are all using AWS issued cert. Only D uses cert issued by external CA, which meets the requirement.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Key word: External CA -> manually
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: D
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon
CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
This option meets the requirements because it uses an SSL/TLS certificate issued by an external CA and involves a manual rotation process that can
be done yearly before the certificate expires. The other options involve using AWS Certificate Manager to issue the certificate, which does not meet
the requirement of using an external CA.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: D
Option D. ACM cannot automatically renew imported certificates.
upvoted 1 times
6 months, 1 week ago
D
https://aws.amazon.com/certificate-manager/faqs/
Imported certificates – If you want to use a third-party certificate with Amazon CloudFront, Elastic Load Balancing, or Amazon API Gateway, you
may import it into ACM using the AWS Management Console, AWS CLI, or ACM APIs. ACM can not renew imported certificates, but it can help you
manage the renewal process. You are responsible for monitoring the expiration date of your imported certificates and for renewing them before
they expire. You can use ACM CloudWatch metrics to monitor the expiration dates of an imported certificates and import a new third-party
certificate to replace an expiring one.
upvoted 2 times
6 months, 1 week ago
Selected Answer: A
The correct answer is A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed
renewal feature to automatically rotate the certificate.
AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy Secure Sockets Layer/Transport Layer Security
(SSL/TLS) certificates for use with AWS resources. ACM provides managed renewal for SSL/TLS certificates, which means that ACM automatically
renews your certificates before they expire.
To meet the requirements for the web application, you should use ACM to issue an SSL/TLS certificate and apply it to the Application Load Balancer
(ALB). Then, you can use the managed renewal feature to automatically rotate the certificate each year before it expires. This will ensure that the
web application is always encrypted at the edge with a valid SSL/TLS certificate.
upvoted 2 times
6 months ago
I am taking back my answer after reading the AWS documentation. The correct answer is Option D. Use AWS Certificate Manager (ACM) to
import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification
when the certificate is nearing expiration. Rotate the certificate manually.
https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html
upvoted 3 times
5 months, 4 weeks ago
That is not good, because you would be applying a new cert from AWS and discarding the still-valid cert from the third party; there might be a reason that they still
want to use the 3rd-party cert.
upvoted 1 times
6 months ago
NOT ELIGIBLE if it is a private certificate issued by calling the AWS Private CA IssueCertificate API.
NOT ELIGIBLE if imported.
NOT ELIGIBLE if already expired.
upvoted 1 times
6 months, 1 week ago
Option D, using ACM to import an SSL/TLS certificate and manually rotating the certificate, would not meet the requirement to rotate the
certificate before it expires each year.
Option C, using ACM Private Certificate Authority, is not necessary in this scenario because the requirement is to use a certificate issued by an
external certificate authority.
Option B, importing the key material from the certificate, is not a valid option because ACM does not allow you to import key material for
SSL/TLS certificates.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
Key phrase: external cert
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 2 times
Topic 1
Question #63
A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company
intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the
original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over
time.
Which solution meets these requirements MOST cost-effectively?
A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store
them back in Amazon S3.
B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg
format and store them back in DynamoDB.
C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon
EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg
files in the EBS store.
D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon
EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the file to .jpg format. Save the .pdf files and the .jpg
files in the EBS store.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: A
Option A. Elastic Beanstalk is expensive, and DocumentDB has a 400KB max to upload files. So Lambda and S3 should be the one.
upvoted 33 times
6 months, 1 week ago
I'm thinking when you wrote DocumentDB you meant it as DynamoDB...yes?
upvoted 2 times
6 months ago
Yes, DynamoDB has 400KB limit for the item.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ServiceQuotas.html
upvoted 3 times
7 months, 4 weeks ago
In addition to this, Lambda is paid for only when used....
upvoted 5 times
8 months ago
Is Lambda as scalable as EC2?
upvoted 4 times
Most Recent
1 week ago
Selected Answer: A
B. Using DynamoDB for storing and processing large .pdf files would not be cost-effective due to the storage and throughput costs associated with
DynamoDB.
C. Using Elastic Beanstalk with EC2 and EBS storage can work, but it may not be the most cost-effective solution. It involves managing the underlying
infrastructure and scaling manually.
D. Similar to C, using Elastic Beanstalk with EC2 and EFS storage can work, but it may not be the most cost-effective solution. EFS is a shared file
storage service and may not provide optimal performance for the conversion process, especially as demand and file sizes increase.
A. This option leverages Lambda and the scalable, cost-effective storage of S3. With Lambda, you only pay for the actual compute time used during the file
conversion, and S3 provides durable and scalable storage for both the .pdf files and the .jpg files. The S3 PUT event triggers Lambda to perform the
conversion, eliminating the need to manage infrastructure and scaling, making it the most cost-effective solution for this scenario.
upvoted 2 times
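The S3-PUT-to-Lambda flow in option A can be sketched as a handler skeleton. This is an illustrative sketch, not the exam's reference code: the actual PDF-to-JPG conversion (which would need an imaging library such as a Poppler-based Lambda layer) is left as a placeholder, and writing the output to a separate bucket avoids re-triggering the function:

```python
import urllib.parse

def handler(event, context=None):
    """Skeleton Lambda handler for an S3 PUT trigger.

    Extracts each uploaded object's location from the S3 event record;
    the conversion step and the write to a *separate* output bucket are
    placeholders.
    """
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # convert_pdf_to_jpg(bucket, key)  # placeholder, not implemented
        results.append((bucket, key))
    return results
```

Writing the .jpg output to a different bucket (or filtering the trigger by prefix/suffix) also answers the infinite-loop concern raised later in this thread.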
3 weeks, 2 days ago
Selected Answer: A
The solution that meets these requirements most cost-effectively is option A.
upvoted 1 times
Community vote distribution
A (98%)
1 month, 1 week ago
Selected Answer: A
I think the best solution is A.
Ref. https://s3.amazonaws.com/doc/s3-developer-guide/RESTObjectPUT.html
upvoted 1 times
1 month, 1 week ago
Since this requires a cost-effective solution, you can use Lambda to convert the .pdf files to .jpg and store them on S3. Lambda is serverless, so you only
pay when you use it, and it automatically scales to cope with demand.
upvoted 1 times
2 months ago
If option A is correct, won't storing the data back to the same S3 bucket cause an infinite loop? It's not best practice to store an object that
is processed by a Lambda function back in the same S3 bucket, as it can cause an infinite loop. And for option B, can't we increase the limits of
DynamoDB by requesting it from AWS?
upvoted 2 times
2 months ago
In the question, it is never mentioned that the .jpg files will also be stored in the same S3 bucket. We can have different S3 buckets, right?
upvoted 2 times
2 months, 1 week ago
Selected Answer: A
Answer A is the most cost effective solution that meets the requirement
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Key words: MOST cost-effectively, so S3 + Lambda
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
This solution will meet the company's requirements in a cost-effective manner because it uses a serverless architecture with AWS Lambda to
convert the files and store them in S3. The Lambda function will automatically scale to meet the demand for file conversions and S3 will
automatically scale to store the original and converted files as needed.
upvoted 2 times
6 months, 1 week ago
Selected Answer: A
Option A is the most cost-effective solution that meets the requirements.
In this solution, the .pdf files are saved to Amazon S3, which is an object storage service that is highly scalable, durable, and secure. S3 can store
unlimited amounts of data at a very low cost.
The S3 PUT event triggers an AWS Lambda function to convert the .pdf files to .jpg format. Lambda is a serverless compute service that runs code
in response to specific events and automatically scales to meet demand. This means that the conversion process can scale up or down as needed,
without the need for manual intervention.
The converted .jpg files are then stored back in S3, which allows the company to store both the original .pdf files and the converted .jpg files in the
same service. This reduces the complexity of the solution and helps to keep costs low.
upvoted 1 times
6 months, 1 week ago
Option C is also a valid solution, but it may be more expensive due to the use of EC2 instances, EBS storage, and an Auto Scaling group. These
resources can add additional cost, especially if the demand for the conversion service grows rapidly.
Option D is not a valid solution because it uses Amazon EFS, which is a file storage service that is not suitable for storing large amounts of data.
EFS is designed for storing and accessing files that are accessed frequently, such as application logs and media files. It is not designed for
storing large files like .pdf or .jpg files.
upvoted 2 times
5 months, 3 weeks ago
EFS is optimized for a wide range of workloads and file sizes, and it can store files of any size up to the capacity of the file system. EFS scales
automatically to meet your storage needs, and it can store petabyte-level capacity.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 3 weeks ago
This gives an example, using GET rather than PUT, but the idea is the same: https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-
object-lambda-uppercase.html
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 2 weeks ago
S3 is cost effective
upvoted 1 times
8 months ago
Selected Answer: B
For rapid scalability, B - DynamoDB looks to be a better solution.
upvoted 1 times
7 months, 4 weeks ago
It is not correct because the maximum item size in DynamoDB is 400 KB.
upvoted 11 times
Topic 1
Question #64
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-
premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant
changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?
A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server.
Reconfigure the workloads to use FSx for Windows File Server on AWS.
B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-
premises workloads and the cloud workloads to use the S3 File Gateway.
C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to
use either Amazon S3 directly or the S3 File Gateway, depending on each workload's location.
D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move
the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-
premises workloads to use the FSx File Gateway.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/83281-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 15 times
Highly Voted
8 months, 1 week ago
Selected Answer: D
https://docs.aws.amazon.com/filegateway/latest/filefsxw/what-is-file-fsxw.html
upvoted 6 times
6 months, 2 weeks ago
From that shared doc: "Amazon FSx File Gateway (FSx File Gateway) is a new File Gateway type that provides low latency and efficient access to
in-cloud FSx for Windows File Server file shares from your on-premises facility. If you maintain on-premises file storage because of latency or
bandwidth requirements, you can instead use FSx File Gateway for seamless access to fully managed, highly reliable, and virtually unlimited
Windows file shares provided in the AWS Cloud by FSx for Windows File Server."
upvoted 6 times
Most Recent
1 week ago
Selected Answer: D
Amazon FSx File Gateway (FSx File Gateway) is a new File Gateway type that provides low latency and efficient access to in-cloud FSx for Windows
File Server file shares from your on-premises facility. If you maintain on-premises file storage because of latency or bandwidth requirements, you
can instead use FSx File Gateway for seamless access to fully managed, highly reliable, and virtually unlimited Windows file shares provided in the
AWS Cloud by FSx for Windows File Server.
FSx File Gateway provides the following benefits:
1. Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud storage.
2. Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data.
3. Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS, without
taxing your networks or impacting the latencies experienced by your most demanding applications.
upvoted 2 times
3 weeks, 2 days ago
Selected Answer: D
Option D meets these requirements.
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
D is correct
https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/
upvoted 1 times
Community vote distribution
D (84%)
Other
1 month, 4 weeks ago
Amazon FSx File Gateway for low latency and efficient access to in-cloud FSx for Windows File Server.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: D
https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
1. You cannot move on-prem files to the FSx File Gateway, as it has limited storage and is used for caching only.
2. You need to migrate the on-prem file server to AWS FSx for Windows File Server and let on-prem users access the file server through the FSx File Gateway.
3. Configure on-prem apps to use the AWS file server.
4. Configure AWS apps to access FSx files directly through the app.
5. A is the correct answer.
upvoted 2 times
5 months ago
Selected Answer: D
The company stated that they wanted to move the data from on-prem to AWS with 'low latency' and 'no changes on current file access patterns', so an
FSx File Gateway is still needed on-prem to cache the data on its way to the cloud, plus a secure data/file move. The Site-to-Site VPN is for users
accessing the data from on-prem and cloud within the premises' network.
Check the Conclusion section for a summary: https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-
gateway/
upvoted 2 times
5 months, 1 week ago
D IS WRONG - it's used for caching. You cannot 'move the on-premises file data to the FSx File Gateway,' which is stated in answer D. I'm pretty sure
AWS employees are spamming this site with the wrong answers intentionally.
upvoted 5 times
5 months, 3 weeks ago
Selected Answer: D
This solution will meet the requirements because it allows the company to continue using a file server with minimal changes to the existing file
access patterns. FSx for Windows File Server integrates with the on-premises Active Directory, so users can continue accessing the file data with
their existing credentials. The Site-to-Site VPN connection can be used to establish low-latency connectivity between the on-premises file servers
and FSx for Windows File Server on AWS. FSx for Windows File Server is also highly available and scalable, so it can handle the workloads' file
storage needs.
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: D
FSx is for Windows files; other options like S3 can certainly handle files but might bring compatibility issues. And an FSx gateway has a sort of
cache mechanism that makes users feel they are accessing a local file system.
upvoted 2 times
6 months ago
Benefits of using Amazon FSx File Gateway ****WINDOWS FILE SERVERS***
FSx File Gateway provides the following benefits:
Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud storage.
Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data.
Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS, without taxing
your networks or impacting the latencies experienced by your most demanding applications.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
I think it is C. To meet these requirements, the solutions architect could recommend using AWS Storage Gateway to provide file-based storage
access between the on-premises file servers and AWS.
AWS Storage Gateway is a hybrid storage service that connects on-premises storage environments with AWS storage infrastructure. It provides low-
latency file-based storage access to AWS, enabling users and applications to access data in AWS as if it were stored on-premises.
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
The correct solution is C. Deploy and configure an Amazon S3 File Gateway on-premises. Move the on-premises file data to Amazon S3.
Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway, depending on each workload's location.
Amazon S3 is a highly durable and scalable object storage service that is well-suited for storing large amounts of file data. By moving the on-
premises file data to Amazon S3, you can take advantage of its durability, scalability, and global availability, while still allowing users and
applications to access the data using their existing file access patterns.
The Amazon S3 File Gateway can be deployed on-premises and configured to provide file-based access to data stored in Amazon S3. This allows
users and applications to access the data stored in Amazon S3 as if it were stored on a local file server, while still taking advantage of the benefits
of storing the data in Amazon S3.
upvoted 1 times
6 months, 1 week ago
Option A, deploying and configuring Amazon FSx for Windows File Server on AWS, would not meet the requirement to minimize operational
overhead, as it would require significant changes to the existing file access patterns.
Option B, deploying and configuring an Amazon S3 File Gateway on-premises and moving the on-premises file data to the S3 File Gateway,
would not meet the requirement to minimize operational overhead, as it would require significant changes to the existing file access patterns.
Option D, deploying and configuring Amazon FSx for Windows File Server on AWS and an Amazon FSx File Gateway on-premises, would not
meet the requirement to minimize operational overhead, as it would require significant changes to the existing file access patterns.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Answer C
Option C will provide low-latency access to the file data from both on-premises and AWS environments, and it will minimize operational overhead
by requiring no significant changes to the existing file access patterns. Additionally, the use of the AWS Site-to-Site VPN connection will ensure
secure and seamless connectivity between the on-premises and AWS environments. Option A is not correct because it only addresses the
requirement to access file data on AWS, but it does not address the requirement to access file data on premises with minimal latency.Option D is
not correct because it involves deploying and configuring two different file storage services (FSx for Windows File Server and FSx File Gateway),
which would add complexity and operational overhead. It also does not provide a solution for accessing file data on premises with minimal latency.
upvoted 2 times
6 months, 2 weeks ago
"the company requires access to AWS and on-premises file storage" - C excludes the on-premises needs.
upvoted 1 times
Topic 1
Question #65
A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload
reports that are in PDF format and JPEG format. The hospital needs to modify the Lambda code to identify protected health information (PHI) in
the reports.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text.
B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text.
C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
Correct Answer:
C
Highly Voted
6 months, 1 week ago
Selected Answer: C
The correct solution is C: Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the
extracted text.
Option C: Using Amazon Textract to extract the text from the reports, and Amazon Comprehend Medical to identify the PHI from the extracted text,
would be the most efficient solution as it would involve the least operational overhead. Textract is specifically designed for extracting text from
documents, and Comprehend Medical is a fully managed service that can accurately identify PHI in medical text. This solution would require
minimal maintenance and would not incur any additional costs beyond the usage fees for Textract and Comprehend Medical.
upvoted 10 times
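As a sketch of what answer C could look like inside the hospital's Lambda code: a minimal boto3 flow that runs Textract OCR and then Comprehend Medical's DetectPHI. The bucket/key names are placeholders (not from the question), and a multi-page PDF would need Textract's asynchronous start_document_text_detection API rather than the synchronous call shown here.

```python
def lines_to_text(blocks):
    """Join the LINE blocks of a Textract response into one string."""
    return " ".join(b["Text"] for b in blocks if b.get("BlockType") == "LINE")


def find_phi(bucket, key, region="us-east-1"):
    """Scan a report for PHI: Textract for OCR, Comprehend Medical for entities.

    Bucket and key are hypothetical; multi-page PDFs require the async
    start_document_text_detection API instead of the sync call below.
    """
    import boto3  # imported lazily so lines_to_text stays testable offline

    textract = boto3.client("textract", region_name=region)
    medical = boto3.client("comprehendmedical", region_name=region)

    resp = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = lines_to_text(resp["Blocks"])

    # DetectPHI returns entities such as NAME, DATE, ID with offsets and scores.
    return [(e["Type"], e["Text"]) for e in medical.detect_phi(Text=text)["Entities"]]
```

Splitting out lines_to_text as a pure helper keeps the OCR plumbing swappable without touching the PHI step.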
6 months, 1 week ago
Option A: Using existing Python libraries to extract the text and identify the PHI from the text would require the hospital to maintain and update
the libraries as needed. This would involve operational overhead in terms of keeping the libraries up to date and debugging any issues that may
arise.
Option B: Using Amazon SageMaker to identify the PHI from the extracted text would involve additional operational overhead in terms of
setting up and maintaining a SageMaker model, as well as potentially incurring additional costs for using SageMaker.
Option D: Using Amazon Rekognition to extract the text from the reports would not be an effective solution, as Rekognition is primarily
designed for image recognition and would not be able to accurately extract text from PDF or JPEG files.
upvoted 4 times
Most Recent
1 week ago
Selected Answer: C
C leverages capabilities of Textract, which is a service that automatically extracts text and data from documents, including PDF and JPEG. By using
Textract, hospital can extract text content from reports without need for additional custom code or libraries.
Once text is extracted, hospital can then use Comprehend Medical, a natural language processing service specifically designed for medical text, to
analyze and identify PHI. It can recognize medical entities such as medical conditions, treatments, and patient information.
A. suggests using existing Python libraries, which would require hospital to develop and maintain custom code for text extraction and PHI
identification.
B and D involve using Textract along with SageMaker or Rekognition, respectively, for PHI identification. While these options could work, they
introduce additional complexity by incorporating machine learning models and training.
upvoted 2 times
2 months, 3 weeks ago
Key word: hospital!
upvoted 1 times
3 months ago
Answer C:
upvoted 1 times
6 months ago
Selected Answer: C
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
WHY OPTION D IS WRONG
upvoted 1 times
5 months, 1 week ago
Because you use Textract to extract text, not Rekognition.
upvoted 1 times
6 months ago
D is wrong because Amazon Rekognition's text detection targets text in images and scenes, not document extraction, and it cannot process PDFs.
upvoted 3 times
6 months, 2 weeks ago
Selected Answer: C
Agreed
upvoted 1 times
6 months, 4 weeks ago
C is correct
Textract- for extracting the text and Comprehend to identify the medical info
https://aws.amazon.com/comprehend/medical/
upvoted 3 times
7 months, 1 week ago
C is correct
upvoted 1 times
8 months, 1 week ago
Selected Answer: C
Textract - to extract text, and Comprehend Medical - to identify the medical info
upvoted 3 times
8 months, 1 week ago
Textract and Comprehend Medical are HIPAA eligible
https://aws.amazon.com/blogs/machine-learning/amazon-textract-is-now-hipaa-eligible/
upvoted 1 times
8 months, 2 weeks ago
Selected Answer: C
Textract - Comprehend Medical for PHI info
upvoted 3 times
Topic 1
Question #66
A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3.
Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files
contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are
rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after
object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from
object creation. Delete the files 4 years after object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object
creation. Delete the files 4 years after object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object
creation. Move the files to S3 Glacier 4 years after object creation.
Correct Answer:
C
Highly Voted
8 months, 1 week ago
Selected Answer: C
i think C should be the answer here,
> Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce
If they do not explicitly mention that they are using Glacier Instant Retrieval, we should assume that Glacier -> takes more time to retrieve and may
not meet the requirements
upvoted 52 times
1 month ago
Yeah, the correct answer is C.
Even if you assume the Glacier class is Instant Retrieval, that class is intended for data accessed about once per quarter, while the question
clearly states that the files must be immediately available at any time.
upvoted 1 times
6 months, 3 weeks ago
You can make that assumption, but I think it would be wrong to make it. It does not state they are not using Glacier Instant Retrieval, and its
use would be the logical choice in this question, so I'm going for A
upvoted 4 times
6 months, 2 weeks ago
I think his assumption is correct because if you go to AWS documentation (https://aws.amazon.com/s3/storage-classes/glacier/) they clearly
mention: "S3 Glacier Flexible Retrieval (formerly S3 Glacier)". So since this question doesn't specify the S3 Glacier class, then it would default
to flexible retrieval (which ofc is not equal to Instant Retrieval).
upvoted 9 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Most COST EFFECTIVE
A: S3 Glacier Instant Retrieval is a new storage class that delivers the fastest access to archive storage, with the same low latency and high-
throughput performance as the S3 Standard and S3 Standard-IA storage classes. You can save up to 68 percent on storage costs as compared with
using the S3 Standard-IA storage class when you use the S3 Glacier Instant Retrieval storage class and pay a low price to retrieve data.
upvoted 16 times
4 months, 1 week ago
I would agree if that were one of the answers; however, many of these questions do have alternative solutions, and they do this on purpose to
test your knowledge. Here C is best.
upvoted 1 times
6 months, 2 weeks ago
On the other hand, you need to choose a tier when going for Glacier, so my previous comment wasn't stated well. The question is tricky; I changed
my mind and agree with you on this one.
upvoted 2 times
6 months, 2 weeks ago
Instant Retrieval was never mentioned. The exams always name the tier when it matters. For A to be correct, the answer should at least include a
step mentioning that Instant Retrieval would be used.
upvoted 6 times
7 months, 2 weeks ago
"Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce" is the key sentence. answer is
C.
upvoted 5 times
4 months, 4 weeks ago
I agree with your key sentence, but One Zone-IA doesn't fit critical business data; it is meant for data that can be recreated.
upvoted 1 times
6 months, 3 weeks ago
But S3 Glacier Instant Retrieval "is designed for rarely accessed data that still needs immediate access in performance-sensitive use cases", so
it offers lower cost and instant retrieval, so A
upvoted 1 times
Most Recent
1 week ago
Selected Answer: C
In this option, the company utilizes the S3 bucket lifecycle policy to transition the files from the S3 Standard storage class to the S3 Standard-IA
storage class after 30 days. S3 Standard-IA is designed for infrequently accessed data and offers a lower storage cost compared to S3 Standard,
making it cost-effective for files that are rarely accessed after the initial 30 days.
upvoted 2 times
4 weeks ago
Selected Answer: A
A: With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-
IA) storage class, when your data is accessed once per quarter. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the
same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes.
upvoted 1 times
1 month, 1 week ago
B
upvoted 1 times
1 month, 1 week ago
TRICKY QUESTION! - it mentions immediate accessibility. S3 Glacier Flexible Retrieval (formerly S3 Glacier) delivers low-cost storage with
retrievals in minutes, or free bulk retrievals in 5-12 hours. However, "minutes" is not immediate accessibility.
upvoted 2 times
1 month, 1 week ago
C is correct
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: C
can't assume Glacier Instant Retrieval
upvoted 2 times
1 month, 4 weeks ago
Immediate accessibility is the key - Glacier retrieval takes time (minutes to hours). So C is the answer
upvoted 1 times
2 months ago
Selected Answer: C
Keyword: 1 - Immediate accessibility is always required. 2 - The files are frequently accessed in the first 30 days of the object creation but are rarely
accessed after the first 30 days.
upvoted 2 times
2 months, 1 week ago
Selected Answer: B
one zone is more cost effective
upvoted 2 times
1 month, 2 weeks ago
Because the files contain critical business data, storing them in one zone doesn't guarantee high availability.
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
My answer is A since the question ask for the MOST cost effective solution. Amazon S3 Glacier Flexible Retrieval will meet the immediate
accessibility requirement.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
Immediate accessibility is always required - Infrequent Access is for data that is less frequently accessed, but requires *rapid access when needed*.
Files contain critical business data that is not easy to reproduce so S3 One Zone-IA is not a choice
The files are frequently accessed in the first 30 days - S3 Standard
Files are rarely accessed after the first 30 days (but immediate accessibility is always required), so S3 Standard-IA.
****
Amazon S3 Glacier Instant Retrieval - Millisecond retrieval, great for data accessed ONCE a quarter, Minimum storage duration of 90 DAYS
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: C
Immediate accessibility is always required , so not Glacier, so option C.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Vote for A as 'Immediate accessibility is always required'.
upvoted 1 times
2 months, 3 weeks ago
Sorry, I meant to choose 'C'.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: C
S3 Standard-Infrequent Access (S3 Standard-IA) is a lower-cost storage class than S3 Standard and is designed for data that is accessed less
frequently but still requires immediate access when needed. S3 Standard-IA offers the same low latency and high throughput performance as S3
Standard but at a lower cost.
The files are frequently accessed in the first 30 days of object creation but are rarely accessed after the first 30 days. Therefore, moving the files to
S3 Standard-IA after 30 days will significantly reduce storage costs without sacrificing immediate accessibility.
Deleting the files 4 years after object creation complies with company policy and ensures that the company is not storing data longer than
necessary, which can help reduce storage costs.
upvoted 1 times
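To make the winning option concrete, here is a minimal boto3 sketch of the lifecycle rule from option C: transition to Standard-IA 30 days after creation, expire 4 years after creation. The bucket name and prefix are placeholders, not from the question.

```python
def lifecycle_config(prefix="", ia_days=30, expire_days=4 * 365):
    """Lifecycle configuration matching option C: transition to Standard-IA
    after 30 days, then expire (delete) 4 years after object creation."""
    return {
        "Rules": [
            {
                "ID": "reports-ia-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [{"Days": ia_days, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": expire_days},
            }
        ]
    }


def apply_lifecycle(bucket):
    """Apply the rule to a bucket (the bucket name is a placeholder)."""
    import boto3  # lazy import so the builder above is testable offline

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config()
    )
```

Note that option A would just swap the StorageClass to a Glacier tier, which is what the retrieval-time debate above hinges on.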
3 months, 2 weeks ago
Option A involves moving the files to S3 Glacier, which is a cheaper storage class but incurs additional retrieval costs and has a longer retrieval
time. Since immediate accessibility is always required, this option may not be the best choice.
upvoted 2 times
3 months, 2 weeks ago
Think c should be the answer.
upvoted 1 times
Topic 1
Question #67
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to
an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not
contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
A. Use the CreateQueue API call to create a new queue.
B. Use the AddPermission API call to add appropriate permissions.
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
In the case of SQS with multiple consumers, if one consumer has already picked up the message and is processing it, another consumer can pick it
up in the meantime and process it too, so two copies end up in the table. To avoid this, the message is made invisible from the time it is picked
up and is deleted after processing. The visibility timeout should be increased according to the maximum time taken to process the message.
upvoted 30 times
6 months, 3 weeks ago
To add to this: "The VisibilityTimeout in SQS is a time frame that the message can be hidden so that no others can consume it except the first
consumer who calls the ReceiveMessage API." The ChangeMessageVisibility API changes this value.
upvoted 8 times
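The mechanics described above can be sketched in a few lines of boto3. The queue URL and receipt handle are placeholders that would come from a real ReceiveMessage call; the safety factor of 6 follows the rule of thumb in the SQS developer guide of sizing the timeout as a multiple of the longest expected processing time.

```python
SQS_MAX_VISIBILITY = 43200  # 12 hours, the API's upper bound


def visibility_for(max_processing_seconds, safety_factor=6):
    """Heuristic: set the visibility timeout to a multiple of the longest
    expected processing time, capped at the API maximum."""
    return min(max_processing_seconds * safety_factor, SQS_MAX_VISIBILITY)


def extend_visibility(queue_url, receipt_handle, seconds):
    """Hide an in-flight message for `seconds` more so no other consumer
    receives it while this one is still writing to RDS."""
    import boto3  # lazy import so visibility_for is testable offline

    boto3.client("sqs").change_message_visibility(
        QueueUrl=queue_url, ReceiptHandle=receipt_handle, VisibilityTimeout=seconds
    )
```

The consumer would call extend_visibility right after ReceiveMessage if processing looks likely to outlast the queue's default timeout.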
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
True, it's D.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
upvoted 6 times
Most Recent
1 week ago
Selected Answer: D
The visibility timeout is the duration during which SQS prevents other consumers from receiving and processing the same message. By increasing
the visibility timeout, you allow more time for the processing of a message to complete before it becomes visible to other consumers.
Option A, creating a new queue, does not address the issue of concurrent processing and duplicate records. It would only create a new queue,
which is not necessary for solving the problem.
Option B, adding permissions, also does not directly address the issue of duplicate records. Permissions are necessary for accessing the SQS queue
but not for preventing concurrent processing.
Option C, setting an appropriate wait time using the ReceiveMessage API call, does not specifically prevent duplicate records. It can help manage
the rate at which messages are received from the queue but does not address the issue of concurrent processing.
upvoted 2 times
2 months, 1 week ago
Selected Answer: D
D is correct
upvoted 1 times
3 months ago
Answer D:
The visibility timeout begins when Amazon SQS returns a message.
upvoted 1 times
3 months, 1 week ago
Selected Answer: D
D = ChangeMessageVisibility
upvoted 1 times
5 months, 1 week ago
In theory, between reception and changing the visibility, you can have multiple consumers. The question is not great, as this alone won't
guarantee the message isn't processed twice.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
Increasing the visibility timeout makes sure the message is not visible for the time taken to process it.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
To ensure that messages are being processed only once, a solutions architect should use the ChangeMessageVisibility API call to increase the
visibility timeout which is Option D.
The visibility timeout determines the amount of time that a message received from an SQS queue is hidden from other consumers while the
message is being processed. If the processing of a message takes longer than the visibility timeout, the message will become visible to other
consumers and may be processed again. By increasing the visibility timeout, the solutions architect can ensure that the message is not made visible
to other consumers until the processing is complete and the message can be safely deleted from the queue.
Option A (Use the CreateQueue API call to create a new queue) would not address the issue of duplicate message processing.
Option B (Use the AddPermission API call to add appropriate permissions) is not relevant to this issue.
Option C (Use the ReceiveMessage API call to set an appropriate wait time) is also not relevant to this issue.
upvoted 5 times
5 months, 3 weeks ago
"Not relevant to this issue"??? What is the added value?
upvoted 2 times
4 months ago
Option B (Use the AddPermission API call to add appropriate permissions) is not relevant to this issue because it deals with setting
permissions for accessing an SQS queue, which is not related to preventing duplicate records in the RDS table.
Option C (Use the ReceiveMessage API call to set an appropriate wait time) is not relevant to this issue because it is related to configuring
how long the ReceiveMessage API call should wait for new messages to arrive in the SQS queue before returning an empty response. It does
not address the issue of duplicate records in the RDS table.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: D
D is the correct choice: increase the visibility timeout according to the maximum time taken to process the message in RDS.
upvoted 1 times
Topic 1
Question #68
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a
highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower
traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection
fails.
B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a
backup if the primary VPN connection fails.
C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if
the primary Direct Connect connection fails.
D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create
a backup connection if the primary Direct Connect connection fails.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Direct Connect + VPN best of both
upvoted 12 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: A
Direct Connect offers 1 Gbps, 10 Gbps, or 100 Gbps, while a VPN goes up to 1.25 Gbps.
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 10 times
Most Recent
1 week ago
Selected Answer: A
Options B and C propose using multiple VPN connections for private connectivity and as backups. While VPNs can serve as backups, they may not
provide the same level of consistent low latency and high availability as Direct Connect connections. Additionally, provisioning multiple VPN
tunnels can increase operational complexity and costs.
Option D suggests using the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct
Connect connection fails. While this approach can be automated, it does not provide the same level of immediate failover capabilities as having a
separate backup connection in place.
Therefore, option A, provisioning an AWS Direct Connect connection to a Region and provisioning a VPN connection as a backup, is the most
suitable solution that meets the company's requirements for connectivity, cost-effectiveness, and high availability.
upvoted 2 times
2 months, 1 week ago
Selected Answer: A
Highly available -> Direct Connect, because the connection can go up to 10 Gbps, with VPN (up to 1.25 Gbps) as backup.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
Option A is the correct solution to meet the requirements of the company. Provisioning an AWS Direct Connect connection to a Region will provide
a private and dedicated connection with consistent low latency. As the company requires a highly available connection, a VPN connection can be
provisioned as a backup if the primary Direct Connect connection fails. This approach will minimize costs and provide the required level of
availability.
upvoted 1 times
4 months, 4 weeks ago
Selected Answer: A
With AWS Direct Connect + VPN, you can combine AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This solution
combines the benefits of the end-to-end secure IPSec connection with low latency and increased bandwidth of the AWS Direct Connect to provide
a more consistent network experience than internet-based VPN connections.
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
upvoted 2 times
5 months, 1 week ago
Why not B? Two VPNs on different connections? Direct Connect costs a fortune?
upvoted 1 times
5 months, 1 week ago
The company requires a highly available connection with consistent low latency to an AWS Region, this is provided by Direct Connect as primary
connection. The company allows a slower connection only for the backup option, so A is the right answer
upvoted 2 times
6 months ago
DX for the low-latency connection, and the company accepts slower traffic if the primary connection fails, so we should choose VPN for backup
purposes. The question also stresses: minimize cost.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
This is a tricky question, but let's try to understand its requirements.
The company requires VS The company needs.
The main difference between need and require is that needs are goals and objectives a business must achieve, whereas require or requirements are
the things we need to do in order to achieve a need.
upvoted 2 times
6 months, 1 week ago
To meet the requirements specified in the question, the best solution is to provision two AWS Direct Connect connections to the same Region.
This will provide a highly available connection with consistently low latency to the AWS Region and minimize costs by eliminating internet usage
fees. Provisioning a second Direct Connect connection as a backup will ensure that there is a failover option available in case the primary
connection fails.
upvoted 4 times
1 month, 3 weeks ago
2 Direct connections will not minimize costs. Correct Answer is A
upvoted 1 times
6 months, 1 week ago
Using VPN connections as a backup, as described in options A and B, is not the best solution because VPN connections are typically slower
and less reliable than Direct Connect connections. Additionally, having two VPN connections to the same Region may not provide the
desired level of availability and may not meet the company's requirement for low latency.
Option D, which involves using the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the
primary Direct Connect connection fails, is not a valid option because the Direct Connect failover attribute is not available in the AWS CLI.
upvoted 6 times
1 month, 1 week ago
You forgot to consider that "the company is willing to accept slower traffic if the primary connection fails", so option A is the best answer
upvoted 1 times
6 months, 1 week ago
See pricing for more info.
https://aws.amazon.com/directconnect/pricing/
upvoted 1 times
5 months ago
I love your comments!
upvoted 2 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
7 months ago
Selected Answer: A
A is right; I thought wrong.
upvoted 1 times
7 months ago
Selected Answer: C
I think VPN is not the right solution for "low latency".
So how about C?
upvoted 2 times
6 months, 3 weeks ago
The question mentions that "The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails", so VPN
as the secondary option is acceptable.
upvoted 2 times
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #69
A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are
in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The
company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?
A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-
Region Replication.
B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy
instance for the database.
C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the
snapshots in the event of a failure.
D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event
Notifications to launch an AWS Lambda function to write the data to the database.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
RDS Proxy for Aurora https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 7 times
Highly Voted
5 months, 3 weeks ago
Selected Answer: B
By configuring the Auto Scaling group to use multiple Availability Zones, the application will be able to continue running even if one Availability
Zone goes down. Configuring the database as Multi-AZ will also ensure that the database remains available in the event of a failure in one
Availability Zone. Using an Amazon RDS Proxy instance for the database will allow the application to automatically route traffic to healthy database
instances, further increasing the availability of the application. This solution will meet the requirements for high availability with minimal
operational effort.
upvoted 6 times
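The Multi-AZ pattern described in this answer can be sketched as a CloudFormation fragment. This is a minimal, hypothetical sketch, not the question's actual stack: the resource names, instance class, AZs, subnet IDs, and secret ARNs are all placeholders.

```yaml
# Hypothetical sketch of answer B: Aurora PostgreSQL with a reader in a second
# AZ (Aurora promotes it on failover) plus RDS Proxy for connection management.
Resources:
  AppDBCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      DatabaseName: appdb
      MasterUsername: appadmin          # placeholder; use Secrets Manager in practice
      MasterUserPassword: change-me     # placeholder
  WriterInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      DBClusterIdentifier: !Ref AppDBCluster
      Engine: aurora-postgresql
      DBInstanceClass: db.r6g.large
      AvailabilityZone: us-east-1a
  StandbyReader:                        # second AZ gives the Multi-AZ failover target
    Type: AWS::RDS::DBInstance
    Properties:
      DBClusterIdentifier: !Ref AppDBCluster
      Engine: aurora-postgresql
      DBInstanceClass: db.r6g.large
      AvailabilityZone: us-east-1b
  AppDBProxy:
    Type: AWS::RDS::DBProxy
    Properties:
      DBProxyName: app-db-proxy
      EngineFamily: POSTGRESQL
      RoleArn: arn:aws:iam::111122223333:role/app-db-proxy-role   # hypothetical role
      Auth:
        - SecretArn: arn:aws:secretsmanager:us-east-1:111122223333:secret:AppDbSecret   # hypothetical
      VpcSubnetIds:
        - subnet-aaaa1111
        - subnet-bbbb2222
```

The application then connects to the proxy endpoint rather than the cluster endpoint, which shortens failover impact because the proxy holds client connections open while the standby is promoted.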
Most Recent
1 day, 12 hours ago
Selected Answer: B
B is the correct answer.
upvoted 1 times
1 week ago
Selected Answer: B
A. While this approach provides geographic redundancy, it introduces additional complexity and operational effort, including managing
replication, handling latency, and potentially higher data transfer costs.
C. While snapshots can be used for data backup and recovery, they do not provide real-time failover capabilities and can result in significant data
loss if a failure occurs between snapshots.
D. While this approach offers some decoupling and scalability benefits, it adds complexity to the data flow and introduces additional overhead for
data processing.
In comparison, option B provides a simpler and more streamlined solution by utilizing multiple AZs, Multi-AZ configuration for the database, and
RDS Proxy for improved connection management. It ensures high availability, minimal downtime, and minimum loss of data with the least
operational effort.
upvoted 2 times
1 month, 1 week ago
@Wajif the reason why it's not A is because the question mentions High availability and nothing to do with region. You can achieve HA without
spanning multiple regions. Also B is incorrect because ALB are region specific and span across multiple AZ with that specific region (not cross
region)
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: B
RDS Proxy is fully managed by AWS for RDS/Aurora. It is auto-scaling and highly available by default.
upvoted 1 times
6 months ago
Selected Answer: B
The correct solution is B: Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an
Amazon RDS Proxy instance for the database.
This solution will meet the requirements of high availability with minimum downtime and minimum loss of data with the least operational effort. By
configuring the Auto Scaling group to use multiple Availability Zones, the web application will be able to withstand the failure of one Availability
Zone without any disruption to the service. By configuring the database as Multi-AZ, the database will automatically failover to a standby instance
in a different Availability Zone in the event of a failure, ensuring minimal downtime. Additionally, using an RDS Proxy instance will help to improve
the performance and scalability of the database.
upvoted 3 times
6 months ago
Selected Answer: B
Aurora PostgreSQL DB clusters don't support Aurora Replicas in different AWS Regions
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Replication.html
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Answer is B
it will ensure that the database is highly available by replicating the data to a secondary instance in a different Availability Zone. In the event of a
failure, the secondary instance will automatically take over and continue servicing database requests without any data loss. Additionally,
configuring an Amazon RDS Proxy instance for the database will help improve the availability and scalability of the database
upvoted 4 times
7 months ago
Selected Answer: A
Why not A?
upvoted 2 times
6 months ago
Here is why Option A is not the correct solution:
Option A: Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL
Cross-Region Replication.
While this solution would provide high availability with minimum downtime, it would involve significant operational effort and may result in
data loss. Placing the EC2 instances in different Regions would require significant infrastructure changes and could impact the performance of
the application. Additionally, Aurora PostgreSQL Cross-Region Replication is designed to provide disaster recovery rather than high availability,
and it may result in some data loss during the replication process.
upvoted 3 times
7 months ago
Maybe because of the load balancer, a different Region can't be the answer.
upvoted 2 times
6 months, 4 weeks ago
"The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones". Why
not A?
upvoted 1 times
6 months, 3 weeks ago
They need to be in the same Region
upvoted 1 times
6 months, 3 weeks ago
The question states multiple regions not multiple Availability Zones, a big difference!
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 2 weeks ago
Important fact: EC2 Auto Scaling groups are regional constructs. They can span Availability Zones, but not AWS regions. So can't be D in case you
are between B and D
https://aws.amazon.com/tr/ec2/autoscaling/faqs/
upvoted 2 times
Topic 1
Question #70
A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling
group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances
that run the web service. The company needs to improve the application's availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
A. Enable HTTP health checks on the NLB, supplying the URL of the company's application.
B. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will
restart.
C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application.
Configure an Auto Scaling action to replace unhealthy instances.
D. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to
replace unhealthy instances when the alarm is in the ALARM state.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
I would choose A, as NLB supports HTTP and HTTPS Health Checks, BUT you can't put any URL (as proposed), only the node IP addresses.
So, the solution is C.
upvoted 19 times
7 months, 2 weeks ago
can you elaborate more pls
upvoted 2 times
5 months, 1 week ago
NLBs support HTTP, HTTPS and TCP health checks:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html (check HealthCheckProtocol)
But NLBs only accept either selecting EC2 instances or IP addresses directly as targets. You can't provide a URL to your endpoints, only a
health check path (if you're using HTTP or HTTPS health checks).
upvoted 5 times
2 months ago
What's the difference between endpoint URL and health check path?
upvoted 1 times
2 weeks, 6 days ago
A URL includes the hostname. The health check path is only the path portion. For example,
URL = https://i-0123456789abcdef.us-west-2.compute.internal/index.html
health check path= /index.html
upvoted 1 times
Highly Voted
8 months, 1 week ago
Selected Answer: C
Option C. NLB works at Layer 4 so it does not support HTTP/HTTPS. The replacement for the ALB is the best choice.
upvoted 8 times
5 months, 1 week ago
That's incorrect. NLB does support HTTP and HTTPS (and TCP) health checks.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html
There just isn't an answer option that reflects that. My guess is that the question and/or answer options are outdated.
upvoted 3 times
Most Recent
1 week ago
Selected Answer: C
A. NLB, but NLB's health checks are designed for TCP/UDP protocols and lack the advanced features specific to HTTP applications provided by ALB.
B. This approach involves custom scripting and manual intervention, which contradicts the requirement of not writing custom scripts or code.
D. Since the NLB does not detect HTTP errors, relying solely on the UnhealthyHostCount metric may not accurately capture the health of the
application instances.
Therefore, C is the recommended choice for improving the application's availability without custom scripting or code. By replacing the NLB with an
ALB, enabling HTTP health checks, and configuring Auto Scaling to replace unhealthy instances, the company can ensure that only healthy
instances are serving traffic, enhancing the application's availability automatically.
upvoted 2 times
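Answer C can be sketched as a CloudFormation fragment. This is a hypothetical illustration only: the health check path, ports, VPC/subnet IDs, and launch template ID are placeholders, not values from the question.

```yaml
# Hypothetical sketch of answer C: an ALB target group with an HTTP health
# check on the application's URL path, and an Auto Scaling group that uses
# ELB health checks so instances failing the HTTP check are replaced.
Resources:
  WebTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Protocol: HTTP
      Port: 80
      VpcId: vpc-0123abcd
      HealthCheckProtocol: HTTP
      HealthCheckPath: /health          # assumed application health endpoint
      Matcher:
        HttpCode: '200'                 # anything else marks the target unhealthy
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'
      MaxSize: '6'
      HealthCheckType: ELB              # key setting: trust the load balancer's HTTP check
      HealthCheckGracePeriod: 300
      TargetGroupARNs:
        - !Ref WebTargetGroup
      VPCZoneIdentifier:
        - subnet-aaaa1111
        - subnet-bbbb2222
      LaunchTemplate:
        LaunchTemplateId: lt-0123abcd   # placeholder launch template
        Version: '1'
```

With `HealthCheckType: ELB`, an instance that returns HTTP errors is marked unhealthy by the target group and terminated and replaced by the Auto Scaling group, removing the need for the manual restarts mentioned in the question.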
1 month, 1 week ago
Replace the NLB (layer 4 udp and tcp) with an Application Load Balancer - ALB (layer 7) supports http and https requests.
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
must be C
Application availability: NLB cannot assure the availability of the application. This is because it bases its decisions solely on network and TCP-layer
variables and has no awareness of the application at all. Generally, NLB determines availability based on the ability of a server to respond to ICMP
ping or to correctly complete the three-way TCP handshake. ALB goes much deeper and is capable of determining availability based on not only a
successful HTTP GET of a particular page but also the verification that the content is as was expected based on the input parameters.
upvoted 1 times
3 months, 1 week ago
Also, A doesn't offer what C below offers:
Configure an Auto Scaling action to replace unhealthy instances
upvoted 1 times
4 months, 3 weeks ago
Answer is C
A solution architect can use Amazon EC2 Auto Scaling health checks to automatically detect and replace unhealthy instances in the EC2 Auto
Scaling group. The health checks can be configured to check the HTTP errors returned by the application and terminate the unhealthy instances.
This will ensure that the application's availability is improved, without requiring custom scripts or code.
upvoted 1 times
4 months, 4 weeks ago
I will go with A as Network load balancer supports HTTP and HTTPS health checks, maybe the answer is outdated.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: C
https://medium.com/awesome-cloud/aws-difference-between-application-load-balancer-and-network-load-balancer-cb8b6cd296a4
As NLB does not support HTTP health checks, you can only use ALB to do so.
upvoted 1 times
5 months, 1 week ago
That's incorrect. NLB does support HTTP and HTTPS (and TCP) health checks.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html
Just a general tip: Medium is not a reliable resource. Anyone can create content there. Rely only on official AWS documentation.
upvoted 2 times
6 months ago
Answer is C, and A is wrong because
In NLB, for HTTP or HTTPS health check requests, the host header contains the IP address of the load balancer node and the listener port, not the
IP address of the target and the health check port.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html
upvoted 3 times
6 months ago
Selected Answer: C
Correct answer - C
Network load balancers (Layer 4) allow to:
• Forward TCP & UDP traffic to your instances
• Handle millions of request per seconds
• Less latency ~100 ms (vs 400 ms for ALB)
Best choice for HTTP traffic - replace to Application load balancer
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
The best option to meet the requirements is to enable HTTP health checks on the NLB by supplying the URL of the company's application. This will
allow the NLB to automatically detect HTTP errors and take action, such as marking the target instance as unhealthy and routing traffic away from
it.
Option A - Enable HTTP health checks on the NLB, supplying the URL of the company's application.
This is the correct solution as it allows the NLB to automatically detect HTTP errors and take action.
upvoted 4 times
1 week, 4 days ago
Option C is right. A is not necessarily wrong, but it may not be the most effective solution to meet the requirements in this scenario. Here's why:
Option A suggests enabling HTTP health checks on the Network Load Balancer (NLB) by supplying the URL of the company's application. While
this can help the NLB detect if the application is accessible or not, it does not directly address the specific requirement of automatically
restarting the EC2 instances when HTTP errors occur.
upvoted 1 times
6 months, 1 week ago
Option B - Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the
application will restart.
This option involves writing custom scripts or code, which is not allowed by the requirements. Additionally, this solution may not be reliable or
efficient, as it relies on checking the logs locally on each instance and may not catch all errors.
Option C - Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application.
Configure an Auto Scaling action to replace unhealthy instances.
While this option may improve the availability of the application, it is not necessary to replace the NLB with an Application Load Balancer in
order to enable HTTP health checks. The NLB can support HTTP health checks as well, and replacing it may involve additional effort and cost.
upvoted 3 times
6 months, 1 week ago
Option D - Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling
action to replace unhealthy instances when the alarm is in the ALARM state.
This option involves monitoring the UnhealthyHostCount metric, which only reflects the number of unhealthy targets that the NLB is
currently routing traffic away from. It does not directly monitor the health of the application or detect HTTP errors. Additionally, this
solution may not be sufficient to detect and respond to HTTP errors in a timely manner.
upvoted 1 times
2 months, 4 weeks ago
This won't increase availability when instances become unavailable.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A is very much a valid option as Autoscaling group can be configured to remove EC2 instances that fails http health check of NLB. AWS NLB
supports http based health check.
upvoted 1 times
7 months ago
Selected Answer: A
A is the best option.
NLBs support HTTP health checks, so why do we need to move to an ALB?
Moreover, the sentence "Configure an Auto Scaling action to replace unhealthy instances" in C seems wrong, as Auto Scaling removes any
unhealthy instance by default; you do not need to configure it.
upvoted 1 times
6 months, 3 weeks ago
I would say A will not give you what you want. "If you add a TLS listener to your Network Load Balancer, we perform a listener connectivity test."
(https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) So a check will be made to see that
something is listening on port 443. What it will not check is the status of the application e.g. HTTP 200 OK. Now the Application Load Balancer
HTTP health check using the URL of the company's application, will do this, so C is the correct answer.
upvoted 2 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
C is correct!
NLB does not handle HTTP (layer 7) listener errors, only TCP (layer 4) listeners.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-nlb.html
upvoted 4 times
7 months, 3 weeks ago
Answer is A
NLB is ideal for TCP and UDP traffic, with checks operating at layer 4.
ALB supports HTTP and HTTPS traffic. Hence the ELB needs to be changed from NLB to ALB.
upvoted 1 times
8 months, 1 week ago
Selected Answer: A
NLB supports HTTP health checks, they are part of the target group and the setting is the same for ALB and NLB HTTP/HTTPS health checks.
upvoted 1 times
7 months ago
A is incorrect. NLB cannot detect http errors. Adding health check only detects the healthiness of the instances, not http errors.
upvoted 2 times
8 months ago
"The company needs to improve the application's availability"
Answer A does not address this. The auto scaling group in answer C does.
upvoted 1 times
7 months, 4 weeks ago
NLB is already configured with a target group supported by EC2 ASG "NLB's target group is configured to use an Amazon EC2 Auto Scaling
group". NLB need to be configured to use http health check. Hence A
upvoted 2 times
6 months, 2 weeks ago
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-nlb.html
Note
Unlike a Classic Load Balancer or an Application Load Balancer, a Network Load Balancer can't have application layer (layer 7) HTTP or
HTTPS listeners. It only supports transport layer (layer 4) TCP listeners. HTTP and HTTPS traffic can be routed to your environment over
TCP.
upvoted 1 times
Topic 1
Question #71
A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions
architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?
A. Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region.
B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB.
D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the
DynamoDB table by using the EBS snapshot.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
A - DynamoDB global tables provides multi-Region, and multi-active database, but it not valid "in case of data corruption". In this case, you need a
backup. This solutions isn't valid.
**B** - Point-in-time recovery is designed as a continuous backup, just to recover it fast. It covers the RPO perfectly, and probably the RTO.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html
C - A daily export will not cover the RPO of 15min.
D - DynamoDB is serverless... so what are these EBS snapshots taken from???
upvoted 31 times
5 months ago
Yes, it is possible to take EBS snapshots of a DynamoDB table. The process for doing this involves the following steps:
Create a new Amazon Elastic Block Store (EBS) volume from the DynamoDB table.
Stop the DynamoDB service on the instance.
Detach the EBS volume from the instance.
Create a snapshot of the EBS volume.
Reattach the EBS volume to the instance.
Start the DynamoDB service on the instance.
You can also use AWS Data pipeline to automate the above process and schedule regular snapshots of your DynamoDB table.
Note that, if your table is large and you want to take a snapshot of it, it could take a long time and consume a lot of bandwidth, so it's
recommended to use the Global Tables feature from DynamoDB in order to have a Multi-region and Multi-master DynamoDB table, and you
can snapshot each region separately.
upvoted 1 times
2 months, 3 weeks ago
What is "DynamoDB service on the instance" ?
upvoted 1 times
Highly Voted
6 months, 1 week ago
Selected Answer: B
The best solution to meet the RPO and RTO requirements would be to use DynamoDB point-in-time recovery (PITR). This feature allows you to
restore your DynamoDB table to any point in time within the last 35 days, with a granularity of seconds. To recover data within a 15-minute RPO,
you would simply restore the table to the desired point in time within the last 35 days.
To meet the RTO requirement of 1 hour, you can use the DynamoDB console, AWS CLI, or the AWS SDKs to enable PITR on your table. Once
enabled, PITR continuously captures point-in-time copies of your table data in an S3 bucket. You can then use these point-in-time copies to restore
your table to any point in time within the retention period.
***CORRECT***
Option B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
upvoted 5 times
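Enabling PITR as described above is a one-line table property. The fragment below is a hypothetical sketch; the table name and key schema are placeholders, not the company's actual schema.

```yaml
# Hypothetical sketch of answer B: a DynamoDB table with point-in-time
# recovery enabled. PITR is a continuous backup; you can restore to any
# second within the retention window (up to 35 days).
Resources:
  CustomerTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: Customers              # placeholder name
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: CustomerId
          AttributeType: S
      KeySchema:
        - AttributeName: CustomerId
          KeyType: HASH
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true   # the setting that satisfies the 15-minute RPO
```

A recovery would then restore into a new table, for example with `aws dynamodb restore-table-to-point-in-time --source-table-name Customers --target-table-name Customers-restored --restore-date-time <timestamp>`, after which the application is pointed at the restored table.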
6 months, 1 week ago
***WRONG***
Option A (configuring DynamoDB global tables) would not meet the RPO requirement, as global tables are designed to replicate data to
multiple regions for high availability, but they do not provide a way to restore data to a specific point in time.
Option C (exporting data to S3 Glacier) would not meet the RPO or RTO requirements, as S3 Glacier is a cold storage service with a retrieval time
of several hours.
Option D (scheduling EBS snapshots) would not meet the RPO requirement, as EBS snapshots are taken on a schedule, rather than continuously.
Additionally, restoring a DynamoDB table from an EBS snapshot can take longer than 1 hour, so it would not meet the RTO requirement.
upvoted 3 times
Most Recent
1 week ago
A. Global tables provide multi-Region replication for disaster recovery purposes, but they may not meet the desired RPO of 15 minutes without
additional configuration and potential data loss.
C. Exporting and importing data on a daily basis does not align with the desired RPO of 15 minutes.
D. EBS snapshots can be used for data backup, they are not directly applicable to DynamoDB and cannot provide the desired RPO and RTO without
custom implementation.
In comparison, option B utilizing DynamoDB's built-in point-in-time recovery functionality provides the most straightforward and effective solution
for meeting the specified RPO of 15 minutes and RTO of 1 hour. By enabling PITR and restoring the table to the desired point in time, the company
can recover the customer information with minimal data loss and within the required time frame.
upvoted 2 times
1 month, 1 week ago
The answer is in the question. Read the question again!!! Option B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the
desired point in time.
upvoted 1 times
2 months ago
If there is anyone who is willing to share his/her contributor access, then please write to vinaychethi99@gmail.com
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
B is correct
DynamoDB point-in-time recovery allows the solutions architect to recover the DynamoDB table to a specific point in time, which would meet the
RPO of 15 minutes. This feature also provides an RTO of 1 hour, which is the desired recovery time objective for the application. Additionally,
configuring DynamoDB point-in-time recovery does not require any additional infrastructure or operational effort, making it the best solution for
this scenario.
Option D is not correct because scheduling Amazon EBS snapshots for the DynamoDB table every 15 minutes would not meet the RPO or RTO
requirements. While EBS snapshots can be used to recover data from a DynamoDB table, they are not designed to provide real-time data
protection or recovery capabilities
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
8 months ago
Selected Answer: B
B is the answer
upvoted 1 times
8 months, 1 week ago
Selected Answer: B
I think DynamoDB global tables also work here, but Point in Time Recovery is a better choice
upvoted 1 times
8 months, 1 week ago
I THINK B.
https://dynobase.dev/dynamodb-point-in-time-recovery/
upvoted 1 times
8 months, 2 weeks ago
answer is D
upvoted 1 times
8 months, 1 week ago
This fool gives wrong answers.
upvoted 1 times
7 months, 3 weeks ago
Try to communicate in English for the audience.
upvoted 4 times
8 months, 2 weeks ago
DynamoDB is serverless, so no storage snapshots available. https://aws.amazon.com/dynamodb/
upvoted 2 times
Topic 1
Question #72
A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located
in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce
these costs.
How can the solutions architect meet this requirement?
A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: D
To reduce costs, get rid of the NAT gateway and use a VPC endpoint to S3.
upvoted 21 times
Highly Voted
6 months, 1 week ago
Selected Answer: D
***CORRECT***
The correct answer is Option D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3
buckets.
By deploying an S3 VPC gateway endpoint, the application can access the S3 buckets over a private network connection within the VPC, eliminating
the need for data transfer over the internet. This can help reduce data transfer fees as well as improve the performance of the application. The
endpoint policy can be used to specify which S3 buckets the application has access to.
upvoted 14 times
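The gateway endpoint described above can be sketched as a CloudFormation fragment. This is a hedged illustration: the VPC ID, route table ID, Region in the service name, and bucket name are placeholders.

```yaml
# Hypothetical sketch of answer D: an S3 gateway endpoint so same-Region S3
# traffic from the app's subnets stays on the AWS network (gateway endpoints
# themselves have no additional charge). All IDs below are placeholders.
Resources:
  S3GatewayEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: vpc-0123abcd
      ServiceName: com.amazonaws.us-east-1.s3   # match the VPC's Region
      VpcEndpointType: Gateway
      RouteTableIds:
        - rtb-0123abcd            # route table of the subnets running the app
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: '*'
            Action:
              - s3:GetObject
              - s3:PutObject
            Resource: 'arn:aws:s3:::photo-bucket/*'   # hypothetical bucket
```

Attaching the endpoint to the route table automatically adds a prefix-list route for S3, so the application's uploads and downloads no longer traverse a NAT gateway or internet gateway.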
6 months, 1 week ago
***WRONG***
Option A, deploying Amazon API Gateway into a public subnet and adjusting the route table, would not address the issue of data transfer fees
as the application would still be transferring data over the internet.
Option B, deploying a NAT gateway into a public subnet and attaching an endpoint policy, would not address the issue of data transfer fees
either as the NAT gateway is used to enable outbound internet access for instances in a private subnet, rather than for connecting to S3.
Option C, deploying the application into a public subnet and allowing it to route through an internet gateway, would not reduce data transfer
fees as the application would still be transferring data over the internet.
upvoted 5 times
Most Recent
1 week ago
Selected Answer: D
A. API Gateway can serve as a proxy for S3 requests, it adds unnecessary complexity and additional costs compared to a direct VPC endpoint.
B. Using a NAT gateway for accessing S3 introduces unnecessary data transfer costs as traffic would still flow over the internet.
C. This approach would incur data transfer fees as the traffic would go through the public internet.
In comparison, option D using an S3 VPC gateway endpoint provides a direct and cost-effective solution for accessing S3 buckets within the same
Region. By keeping the data transfer within the AWS network infrastructure, it helps reduce data transfer fees and provides secure access to the S3
resources.
upvoted 2 times
3 weeks, 1 day ago
Selected Answer: D
Option D is correct answer.
upvoted 1 times
4 months, 4 weeks ago
To answer this question, I need to know the cost comparison between the gateway types; please give me a tip about that.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
The answer is D:- Actually, the Application (EC2) is running in the same region...instead of going to the internet, data can be copied through the
VPC endpoint...so there will be no cost because data is not leaving the AWS infra
upvoted 1 times
6 months, 3 weeks ago
Can somebody please explain this question? Are we assuming the application is running in AWS and that adding the gateway endpoint avoids the
need for the EC2 instance to access the internet and thus avoid costs? Thanks a lot.
upvoted 2 times
6 months, 3 weeks ago
Yes correct
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
8 months, 1 week ago
Selected Answer: D
FYI :
-There is no additional charge for using gateway endpoints.
-Interface endpoints are priced at ~ $0.01/per AZ/per hour. Cost depends on the Region
- S3 Interface Endpoints resolve to private VPC IP addresses and are routable from outside the VPC (e.g via VPN, Direct Connect, Transit Gateway,
etc). S3 Gateway Endpoints use public IP ranges and are only routable from resources within the VPC.
upvoted 5 times
8 months, 2 weeks ago
Selected Answer: D
Close question to the Question #4, with same solution.
upvoted 3 times
Topic 1
Question #73
A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on
an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the
company's internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security
groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances.
B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of
the bastion host.
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of
the bastion host.
Correct Answer:
CD
Highly Voted
8 months, 1 week ago
Selected Answer: CD
C because access from the on-prem network to the bastion goes through the internet (using the company's external/public IP range);
D because the bastion and the EC2 instances are in the same VPC, meaning the bastion can communicate with EC2 via its private IP address.
upvoted 27 times
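The C + D combination can be sketched as two security groups. This is a hypothetical fragment: the company CIDR and VPC ID are placeholders, and it uses a security group reference for the application rule (the common way to express "from the bastion's private address").

```yaml
# Hypothetical sketch of answers C and D: the bastion admits SSH only from
# the company's external IP range; the app instances admit SSH only from the
# bastion's security group, i.e. traffic arriving from its private IP in the VPC.
Resources:
  BastionSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: SSH from the company's external IP range only (answer C)
      VpcId: vpc-0123abcd
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 203.0.113.0/24        # placeholder company external range
  AppSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: SSH from the bastion only (answer D)
      VpcId: vpc-0123abcd
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          SourceSecurityGroupId: !Ref BastionSG   # matches the bastion's private source IP
```

Referencing `BastionSG` rather than hard-coding an IP keeps the rule valid even if the bastion is replaced and gets a new private address.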
Most Recent
1 week ago
Selected Answer: CD
C. This will restrict access to the bastion host from the specific IP range of the on-premises network, ensuring secure connectivity. This step ensures
that only authorized users from the on-premises network can access the bastion host.
D. This step enables SSH connectivity from the bastion host to the application instances in the private subnet. By allowing inbound SSH access only
from the private IP address of the bastion host, you ensure that SSH access is restricted to the bastion host only.
upvoted 2 times
1 month, 3 weeks ago
The internal and external IP ranges are not clearly defined.
upvoted 2 times
2 months ago
The private/public IP address thing is confusing. Ideally, the private instances inbound rule would just allow traffic from the security group of the
bastion host.
upvoted 2 times
4 months ago
Why external and not internal?
upvoted 1 times
3 months, 3 weeks ago
Because the traffic goes through the public internet. In the public internet, public IP(external IP) is used.
upvoted 4 times
4 months, 1 week ago
Selected Answer: CE
Application is in private subnet
Bastion Host is in public subnet
D does not make sense because the bastion host is in public subnet and they don't have a private IP but only a public IP address attached to them.
The IP wanting to connect is Public as well.
Bastion host in public subnet allows external IP (via internet) of the company to access it. Which than leaves us to give permission to the
application private subnet and for that the private subnet with the application accepts the IP coming from Bastion Host by changing its SG. C&E
upvoted 1 times
4 months ago
Bastion host in public subnet because it has a public IP and a NAT Gateway that can route traffic out of your AWS VPC but it does have the
ability to access the private subnet using private IP since it's not leaving AWS to access the private subnet. So C&D are the right answers.
upvoted 1 times
5 months, 2 weeks ago
I don't understand why not CE, because the question asks for access through the internet connection to the servers and the bastion host. I
understand they want to access both from the public side; I mean, not from the servers to the bastion host.
6 months ago
Selected Answer: CD
https://www.examtopics.com/discussions/amazon/view/51356-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
6 months, 1 week ago
Selected Answer: CE
To meet the requirements, the solutions architect should take the following steps:
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company. This
will allow the solutions architect to connect to the bastion host from the company's on-premises network through the internet connection.
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the
bastion host. This will allow the solutions architect to connect to the application instances through the bastion host using SSH.
Note: It's important to ensure that the security groups for the bastion host and application instances are configured correctly to allow the desired
inbound traffic, while still protecting the instances from unwanted access.
upvoted 2 times
6 months, 1 week ago
***WRONG***
Here is why the other options are not correct:
A. Replacing the current security group of the bastion host with one that only allows inbound access from the application instances would not
allow the solutions architect to connect to the bastion host from the company's on-premises network through the internet connection. The
bastion host needs to be accessible from the external network in order to allow the solutions architect to connect to it.
B. Replacing the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company
would not allow the solutions architect to connect to the bastion host from the company's on-premises network through the internet
connection. The internal IP range is not accessible from the external network.
upvoted 1 times
6 months, 1 week ago
D. Replacing the current security group of the application instances with one that allows inbound SSH access from only the private IP address
of the bastion host would not allow the solutions architect to connect to the application instances through the bastion host using SSH. The
private IP address of the bastion host is not accessible from the external network, so the solutions architect would not be able to connect to
it from the on-premises network.
upvoted 1 times
6 months, 1 week ago
Selected Answer: CD
C and D
upvoted 1 times
7 months, 1 week ago
C and D
upvoted 1 times
8 months ago
CD is Ok.
upvoted 1 times
8 months, 1 week ago
why C? External?
upvoted 2 times
6 months, 3 weeks ago
Because the IP address exposed to the bastion host will be the external, not the internal, IP address. Google "what's my ip" and you will see your
IP address on the internet is NOT your internal IP.
upvoted 3 times
8 months, 1 week ago
Selected Answer: CD
Option C (allow access just from the external IP) and D (allow inbound SSH from the private IP of the bastion host).
upvoted 2 times
8 months, 2 weeks ago
Selected Answer: CD
CD is correct
upvoted 2 times
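For reference, the two rules implied by answers C and D can be sketched as the `IpPermissions` structures that boto3's `authorize_security_group_ingress` accepts. This is a minimal sketch: the CIDR range and the bastion's private IP below are hypothetical placeholders, not values from the question.

```python
# Sketch of the security-group rules implied by answers C and D.
# Both values below are hypothetical placeholders.
COMPANY_EXTERNAL_CIDR = "203.0.113.0/24"   # assumed on-premises public range
BASTION_PRIVATE_IP = "10.0.1.10/32"        # assumed bastion private address

# C: bastion SG allows SSH only from the company's external IP range
bastion_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": COMPANY_EXTERNAL_CIDR}],
}

# D: application SG allows SSH only from the bastion's private IP
app_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": BASTION_PRIVATE_IP}],
}
```

The point of the two rules together: external SSH can only reach the bastion, and the application instances only accept SSH that originates from the bastion.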
Topic 1
Question #74
A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public
subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the
company.
How should security groups be configured in this situation? (Choose two.)
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
Correct Answer:
AC
Highly Voted
7 months, 3 weeks ago
Selected Answer: AC
Web Server Rules: Inbound traffic from 443 (HTTPS) Source 0.0.0.0/0 - Allows inbound HTTPS access from any IPv4 address
Database Rules : 1433 (MS SQL)The default port to access a Microsoft SQL Server database, for example, on an Amazon RDS instance
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html
upvoted 15 times
Highly Voted
8 months, 1 week ago
Selected Answer: AC
EC2 web on public subnets + EC2 SQL on private subnet + security is high priority. So, Option A to allow HTTPS from everywhere. Plus option C to
allow SQL connection from the web instance.
upvoted 13 times
Most Recent
1 week ago
Selected Answer: AC
A. This configuration allows external users to access the web tier over HTTPS (port 443). However, it's important to note that it is generally
recommended to restrict the source IP range to a more specific range rather than allowing access from 0.0.0.0/0 (anywhere). This would limit access
to only trusted sources.
C. By allowing inbound traffic on port 1433 (default port for Microsoft SQL Server) from the security group associated with the web tier, you ensure
that the database tier can only be accessed by the EC2 instances in the web tier. This provides a level of isolation and restricts direct access to the
database tier from external sources.
upvoted 2 times
1 month, 1 week ago
DB tier: Port 1433 is the known standard for SQL server and should be used.
web tier on port 443 (HTTPS)
upvoted 2 times
1 month, 1 week ago
Selected Answer: AC
AC is correct
upvoted 1 times
4 months ago
A & C are the correct answer.
Inbound traffic to the web tier on port 443 (HTTPS)
The web tier will then access the Database tier on port 1433 - inbound.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: AC
AC: 443 (HTTPS) inbound and 1433 (SQL Server).
Security groups => focus on inbound traffic, since outbound traffic is allowed by default
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: AC
Security groups => focus on inbound traffic, since outbound traffic is allowed by default
upvoted 2 times
6 months, 1 week ago
why both are inbound rules
upvoted 1 times
3 months ago
Because security groups are stateful.
upvoted 1 times
6 months, 1 week ago
Selected Answer: CE
***CORRECT***
The correct answers are C and E.
For security purposes, it is best practice to limit inbound and outbound traffic as much as possible. In this case, the web tier should only be able to
access the database tier and not the other way around. Therefore, the security group for the web tier should only allow outbound traffic to the
security group for the database tier on the necessary ports. Similarly, the security group for the database tier should only allow inbound traffic from
the security group for the web tier on the necessary ports.
Answer C: Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier. This is
correct because the web tier needs to be able to connect to the database on port 1433 in order to access the data.
upvoted 1 times
6 months ago
This is WRONG. Browse to a website and type :443 at the end of the URL; it will translate to HTTPS. HTTPS = 443.
Answers are A and C
upvoted 3 times
6 months, 1 week ago
Answer E: Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web
tier. This is correct because the web tier needs to be able to connect to the database on both port 443 and 1433 in order to access the data.
***WRONG***
Answer A: Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. This is not correct because the web
tier should not allow inbound traffic from the internet. Instead, it should only allow outbound traffic to the security group for the database tier.
upvoted 1 times
6 months, 1 week ago
***WRONG***
Answer B: Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0. This is not correct because the
web tier should not allow outbound traffic to the internet. Instead, it should only allow outbound traffic to the security group for the
database tier.
Answer D: Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the
web tier. This is not correct because the database tier should not allow outbound traffic to the web tier. Instead, it should only allow inbound
traffic from the security group for the web tier on the necessary ports.
upvoted 1 times
5 months, 2 weeks ago
ChatGPT is unreliable; this answer came from it.
upvoted 1 times
6 months, 1 week ago
Selected Answer: AC
A and C
upvoted 1 times
7 months, 1 week ago
A and C
upvoted 1 times
8 months ago
Agree with AC.
upvoted 2 times
8 months, 2 weeks ago
Very good questions
upvoted 3 times
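The two rules in answers A and C can be sketched as the `IpPermissions` structures boto3 accepts. A minimal sketch: `sg-web` is a hypothetical security-group ID, and note that the database rule references the web tier's security group rather than an IP range.

```python
# A: web tier accepts HTTPS from anywhere
web_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}

# C: database tier accepts SQL Server traffic only from the web tier's SG
db_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 1433,
    "ToPort": 1433,
    "UserIdGroupPairs": [{"GroupId": "sg-web"}],  # hypothetical SG ID
}
```

Referencing the security group instead of CIDR ranges means the rule keeps working as web-tier instances are added or replaced.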
Topic 1
Question #75
A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application
consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes
overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?
A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service
(Amazon SQS) as the communication layer between application services.
B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the
performance failures. Increase the size of the application server's Amazon EC2 instances to meet the peak requirements.
C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an
Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto
Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
Correct Answer:
A
Highly Voted
8 months ago
Agree with A>>> Lambda = serverless + autoscale (modernize), SQS= decouple (no more drops)
upvoted 18 times
Highly Voted
4 months, 4 weeks ago
Selected Answer: A
The catch phrase is "scale up when communication failures are detected". Scaling should not be based on communication failures; that would be
crying over spilled milk, or rather too late. So D is wrong.
upvoted 9 times
4 months, 3 weeks ago
It says "one tier becomes overloaded", not communication failure...
upvoted 2 times
4 months, 3 weeks ago
D says: "Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected".
upvoted 3 times
Most Recent
1 week ago
Selected Answer: D
This solution addresses the issue of dropped transactions by decoupling the communication between application tiers using SQS. It ensures that
transactions are not lost even if one tier becomes overloaded.
By using EC2 in ASG, the application can automatically scale based on the demand and the length of the SQS. This allows for efficient utilization of
resources and ensures that the application can handle increased workload and communication failures.
CloudWatch is used to monitor the length of SQS. When queue length exceeds a certain threshold, indicating potential communication failures, the
ASG can be configured to scale up by adding more instances to handle the load.
A. This solution utilizes Lambda and API Gateway, which can be a valid approach for building serverless applications. However, it may introduce
additional complexity and operational overhead compared to the requirement of modernizing an existing multi-tiered application.
upvoted 2 times
2 months ago
ANS: A Key word - RESTful services - Amazon API Gateway
upvoted 3 times
2 months, 2 weeks ago
Must be D :
Please refer to thread https://pupuweb.com/aws-saa-c02-actual-exam-question-answer-dumps-3/6/
upvoted 1 times
2 months, 3 weeks ago
@Buruguduystunstugudunstuy Kindly share your comments for this question
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: D
Must be D.
A is incorrect. Even though Lambda can auto scale, SQS communication between tiers does not address the drop in transactions per se: SQS lets
messages be read (serially with FIFO or not) in a controlled way, but your application code determines how many threads are spawned to process
those messages. This could still overload the tier.
upvoted 4 times
5 months, 1 week ago
D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto
Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected. This solution
allows for horizontal scaling of the application servers and allows for automatic scaling based on communication failures, which can help prevent
transactions from being dropped when one tier becomes overloaded. It also provides a more modern and operationally efficient way to handle
communication between application services compared to traditional RESTful services.
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: A
Can be A only. Other 3 answers use CloudWatch, which does not make sense for this question.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: A
Serverless and decoupled.
upvoted 2 times
5 months, 4 weeks ago
Selected Answer: A
Serverless (Lambda) + Decouple (SQS) is a modernized application.
The flow looks like this: API Gateway --> SQS (act as decouple) -> Lambda functions (act as subscriber pull msg from the queue to process)
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
EC2 is not modern...
upvoted 1 times
5 months, 3 weeks ago
lmao...
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
https://serverlessland.com/patterns/apigw-http-sqs-lambda-sls
upvoted 3 times
8 months, 1 week ago
Selected Answer: A
Serverless + decouple
upvoted 3 times
8 months, 1 week ago
Selected Answer: A
A is the correct answer.
upvoted 3 times
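The decoupling argument behind answer A can be illustrated with a local stand-in for SQS (Python's `queue.Queue` here, purely to show the pattern; a real deployment would use boto3's `send_message`/`receive_message` against an actual queue):

```python
import queue

# Local stand-in for SQS: the producer never talks to the consumer
# directly, so a slow consumer just leaves messages waiting in the
# buffer instead of dropping them.
buffer = queue.Queue()

def api_handler(transaction):
    """Fronting tier (API Gateway -> Lambda): enqueue and return fast."""
    buffer.put(transaction)
    return {"status": "accepted"}

def worker():
    """Downstream tier: drains the queue at its own pace."""
    processed = []
    while not buffer.empty():
        processed.append(buffer.get())
    return processed

# A burst of traffic: nothing is dropped even if the worker lags behind.
for i in range(5):
    api_handler({"txn_id": i})
assert worker() == [{"txn_id": i} for i in range(5)]
```

This is why the queue removes the "transactions are dropped when one tier becomes overloaded" failure mode: the queue absorbs the burst.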
Topic 1
Question #76
A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files
stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon
S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because
the data is considered sensitive.
Which solution offers the MOST reliable data transfer?
A. AWS DataSync over public internet
B. AWS DataSync over AWS Direct Connect
C. AWS Database Migration Service (AWS DMS) over public internet
D. AWS Database Migration Service (AWS DMS) over AWS Direct Connect
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
DMS is for databases and here refers to “JSON files”. Public internet is not reliable. So best option is B.
upvoted 22 times
Highly Voted
6 months, 1 week ago
Selected Answer: B
***CORRECT***
The most reliable solution for transferring the data in a secure manner would be option B: AWS DataSync over AWS Direct Connect.
AWS DataSync is a data transfer service that uses network optimization techniques to transfer data efficiently and securely between on-premises
storage systems and Amazon S3 or other storage targets. When used over AWS Direct Connect, DataSync can provide a dedicated and secure
network connection between your on-premises data center and AWS. This can help to ensure a more reliable and secure data transfer compared to
using the public internet.
upvoted 7 times
6 months, 1 week ago
***WRONG***
Option A, AWS DataSync over the public internet, is not as reliable as using Direct Connect, as it can be subject to potential network issues or
congestion.
Option C, AWS Database Migration Service (DMS) over the public internet, is not a suitable solution for transferring large amounts of data, as it
is designed for migrating databases rather than transferring large amounts of data from a storage area network (SAN).
Option D, AWS DMS over AWS Direct Connect, is also not a suitable solution, as it is designed for migrating databases and may not be efficient
for transferring large amounts of data from a SAN.
upvoted 6 times
5 months ago
explanation about D option is good
upvoted 1 times
Most Recent
1 week ago
Selected Answer: B
DataSync is a service specifically designed for data transfer and synchronization between on-premises storage systems and AWS storage services
like S3. It provides reliable and efficient data transfer capabilities, ensuring the secure movement of large volumes of data.
By leveraging Direct Connect, which establishes a dedicated network connection between the on-premises data center and AWS, the data transfer
is conducted over a private and dedicated network link. This approach offers increased reliability, lower latency, and consistent network
performance compared to transferring data over the public internet.
Database Migration Service is primarily focused on database migration and replication, and it may not be the most appropriate tool for general-
purpose data transfer like JSON files.
Transferring data over the public internet may introduce potential security risks and performance variability due to factors like network congestion,
latency, and potential interruptions.
upvoted 2 times
1 month, 1 week ago
Best option and correct is B
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
As Ariel suggested, and rightly so: DMS is for databases, and this question refers to “JSON files”. The public internet is not reliable, so B.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Option B. DMS is not needed as there is no Database migration requirement.
upvoted 1 times
7 months ago
Selected Answer: B
Public internet options are automatically out, being best-effort. DMS is a database migration service, and here they just have to transfer the data
to S3. Hence B.
upvoted 2 times
7 months, 1 week ago
B is correct
upvoted 1 times
8 months, 1 week ago
B
- A SAN presents storage devices to a host such that the storage appears to be locally attached. ( NFS is, or can be, a SAN -
https://serverfault.com/questions/499185/is-san-storage-better-than-nfs )
- AWS Direct Connect does not encrypt your traffic that is in transit by default. But the connection is private
(https://docs.aws.amazon.com/directconnect/latest/UserGuide/encryption-in-transit.html)
upvoted 4 times
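For illustration, the task that answer B implies might look like the parameters boto3's DataSync `create_task` accepts. The ARNs are hypothetical placeholders, and `TransferMode: "CHANGED"` is an assumption that only new or changed files should move on each daily run:

```python
# Sketch of a DataSync task configuration (hypothetical ARNs).
task_params = {
    # On-premises NFS/SMB location fronting the SAN
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    # S3 location for the destination bucket
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
    "Options": {
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",  # end-to-end integrity check
        "TransferMode": "CHANGED",                 # only move new/changed files
    },
}
```

With Direct Connect, the task runs over the private link instead of the public internet, which is where the reliability in answer B comes from.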
Topic 1
Question #77
A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms
data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data
Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the
Kinesis Data Firehose delivery stream to send the data to Amazon S3.
B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use
AWS Glue to transform the data and to send the data to Amazon S3.
C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery
stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose
delivery stream to send the data to Amazon S3.
D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send
the data to Amazon S3.
Correct Answer:
C
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
(A) - You don't need to deploy an EC2 instance to host an API - Operational overhead
(B) - Same as A
(**C**) - Is the answer
(D) - AWS Glue gets data from S3, not from API GW. AWS Glue could do ETL by itself, so don't need lambda. Non sense.
https://aws.amazon.com/glue/
upvoted 31 times
1 month, 2 weeks ago
What I don't understand is why we should use Lambda in between to transform the data. To me, Kinesis Data Firehose is enough, as it is an
extract, transform, and load (ETL) service.
upvoted 1 times
Most Recent
1 week ago
Selected Answer: C
C. By leveraging these services together, you can achieve a real-time data ingestion architecture with minimal operational overhead. The data flows
from the API Gateway to the Kinesis data stream, undergoes transformations with Lambda, and is then sent to S3 via the Kinesis Data Firehose
delivery stream for storage.
A. This adds operational overhead as you need to handle EC2 management, scaling, and maintenance. It is less efficient compared to using a
serverless solution like API Gateway.
B. It requires deploying and managing an EC2 to host the API and configuring Glue. This adds operational overhead, including EC2 management
and potential scalability limitations.
D. It still requires managing and configuring Glue, which adds operational overhead. Additionally, it may not be the most efficient solution as Glue
is primarily used for ETL scenarios, and in this case, real-time data transformation is required.
upvoted 2 times
1 month, 1 week ago
Selected Answer: D
I am gonna choose D for this.
Kinesis Data Streams + Data Firehose adds to the operational overhead, plus it is near-real-time, not a real-time solution.
Lambda functions scale automatically, so no management of scaling/compute resources is needed.
AWS Glue handles the data storage in S3, so no management of that is needed.
upvoted 1 times
3 months, 1 week ago
Gotta love all those chatgpt answers y'all are throwing at us.
Kinesis Firehose is NEAR real-time, not real-time like your bots tell you.
upvoted 2 times
5 months, 1 week ago
Selected Answer: C
option C is the best solution. It uses Amazon Kinesis Data Firehose which is a fully managed service for delivering real-time streaming data to
destinations such as Amazon S3. This service requires less operational overhead as compared to option A, B, and D. Additionally, it also uses
Amazon API Gateway which is a fully managed service for creating, deploying, and managing APIs. These services help in reducing the operational
overhead and automating the data ingestion process.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C is the solution that meets the requirements with the least operational overhead.
In Option C, you can use Amazon API Gateway as a fully managed service to create, publish, maintain, monitor, and secure APIs. This means that
you don't have to worry about the operational overhead of deploying and maintaining an EC2 instance to host the API.
Option C also uses Amazon Kinesis Data Firehose, which is a fully managed service for delivering real-time streaming data to destinations such as
Amazon S3. With Kinesis Data Firehose, you don't have to worry about the operational overhead of setting up and maintaining a data ingestion
infrastructure.
upvoted 1 times
6 months, 1 week ago
Finally, Option C uses AWS Lambda, which is a fully managed service for running code in response to events. With AWS Lambda, you don't have
to worry about the operational overhead of setting up and maintaining a server to run the data transformation code.
Overall, Option C provides a fully managed solution for real-time data ingestion with minimal operational overhead.
upvoted 2 times
6 months, 1 week ago
Option A is incorrect because it involves deploying an EC2 instance to host an API, which adds operational overhead in the form of
maintaining and securing the instance.
Option B is incorrect because it involves deploying an EC2 instance to host an API and disabling source/destination checking on the instance.
Disabling source/destination checking can make the instance vulnerable to attacks, which adds operational overhead in the form of securing
the instance.
upvoted 2 times
6 months, 1 week ago
Option D is incorrect because it involves using AWS Glue to send the data to Amazon S3, which adds operational overhead in the form of
maintaining and securing the AWS Glue infrastructure.
Overall, Option C is the best choice because it uses fully managed services for the API, data transformation, and data delivery, which
minimizes operational overhead.
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
Option C
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
8 months ago
Selected Answer: C
C is correct answer
upvoted 2 times
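The Lambda transform step in answer C follows Kinesis Data Firehose's documented record-transformation contract (base64 records in, base64 records out with a `result` per record). A minimal sketch; the `city` field and the uppercase transform are illustrative assumptions, not part of the question:

```python
import base64
import json

def lambda_handler(event, context):
    """Firehose data-transformation Lambda. The event/response shapes are
    Firehose's documented contract; the uppercase transform stands in for
    real business logic."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["city"] = payload["city"].upper()  # hypothetical field
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # other options: Dropped, ProcessingFailed
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}

# Local invocation with a sample Firehose-shaped event:
event = {"records": [{"recordId": "1",
                      "data": base64.b64encode(b'{"city": "lima"}').decode()}]}
result = lambda_handler(event, None)
assert json.loads(base64.b64decode(result["records"][0]["data"])) == {"city": "LIMA"}
```

Each output record must echo its `recordId` so Firehose can match transformed records back to the originals.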
Topic 1
Question #78
A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?
A. Use DynamoDB point-in-time recovery to back up the table continuously.
B. Use AWS Backup to create backup schedules and retention policies for the table.
C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle
configuration for the S3 bucket.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to
back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
Answer is B
"Amazon DynamoDB offers two types of backups: point-in-time recovery (PITR) and on-demand backups. (==> D is not the answer)
PITR is used to recover your table to any point in time in a rolling 35 day window, which is used to help customers mitigate accidental deletes or
writes to their tables from bad code, malicious access, or user error. (==> A isn't the answer)
On demand backups are designed for long-term archiving and retention, which is typically used to help customers meet compliance and regulatory
requirements.
This is the second of a series of two blog posts about using AWS Backup to set up scheduled on-demand backups for Amazon DynamoDB. Part 1
presents the steps to set up a scheduled backup for DynamoDB tables from the AWS Management Console." (==> Not the DynamoBD console
and C isn't the answer either)
https://aws.amazon.com/blogs/database/part-2-set-up-scheduled-backups-for-amazon-dynamodb-using-aws-backup/
upvoted 36 times
5 months, 1 week ago
I think the answer is C because of storage time.
upvoted 1 times
Highly Voted
6 months, 1 week ago
Selected Answer: B
The most operationally efficient solution that meets these requirements would be to use option B, which is to use AWS Backup to create backup
schedules and retention policies for the table.
AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS resources. It allows
you to create backup policies and schedules to automatically back up your DynamoDB tables on a regular basis. You can also specify retention
policies to ensure that your backups are retained for the required period of time. This solution is fully automated and requires minimal
maintenance, making it the most operationally efficient option.
upvoted 6 times
6 months, 1 week ago
Option A, using DynamoDB point-in-time recovery, is also a viable option but it requires continuous backup, which may be more resource-
intensive and may incur higher costs compared to using AWS Backup.
Option C, creating an on-demand backup of the table and storing it in an S3 bucket, is also a viable option but it requires manual intervention
and does not provide the automation and scheduling capabilities of AWS Backup.
Option D, using Amazon EventBridge (CloudWatch Events) and a Lambda function to back up the table and store it in an S3 bucket, is also a
viable option but it requires more complex setup and maintenance compared to using AWS Backup.
upvoted 7 times
Most Recent
1 week ago
AWS Backup is a fully managed backup service that simplifies the process of creating and managing backups across various AWS services, including
DynamoDB. It allows you to define backup schedules and retention policies to automatically take backups and retain them for the desired duration.
By using AWS Backup, you can offload the operational overhead of managing backups to the service itself, ensuring that your data is protected and
retained according to the specified retention period.
This solution is more efficient compared to the other options because it provides a centralized and automated backup management approach
specifically designed for AWS services. It eliminates the need to manually configure and maintain backup processes, making it easier to ensure data
retention compliance without significant operational effort.
upvoted 2 times
1 week, 6 days ago
A
PITR is used to recover your table to any point in time in a rolling 35 day window, which is used to help customers mitigate accidental deletes or
writes to their tables from bad code, malicious access, or user error. (==> A is the answer)
upvoted 1 times
1 month ago
Using AWS Backup is cheaper than DynamoDB point-in-time recovery.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
With less overhead is AWS Backups:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/backuprestore_HowItWorksAWS.html
upvoted 1 times
3 months ago
Selected Answer: B
To retain data for 7 years in an Amazon DynamoDB table, you can use AWS Backup to create backup schedules and retention policies for the table.
You can also use DynamoDB point-in-time recovery to back up the table continuously.
upvoted 1 times
3 months, 1 week ago
Selected Answer: B
B = AWS backup
upvoted 1 times
5 months, 2 weeks ago
C is correct because we have to store the data in S3 with an S3 Lifecycle configuration on the bucket for 7 years, using an on-demand backup of
the table from the DynamoDB console. If you need to store backups of your data for longer than 35 days, you can use on-demand backup.
On-demand backup provides a fully consistent snapshot of your table data that stays around forever (even after the table is deleted).
upvoted 2 times
3 months, 2 weeks ago
In an AWS Backup plan, you can choose 7-year retention with daily, weekly, or monthly frequency. From an operational perspective, I think B is correct.
upvoted 1 times
5 months, 1 week ago
I think you are correct
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
B. Use AWS Backup to create backup schedules and retention policies for the table.
AWS Backup is a fully managed service that makes it easy to centralize and automate the backup of data across AWS resources. It can be used to
create backup schedules and retention policies for DynamoDB tables, which will ensure that the data is retained for the desired period of 7 years.
This solution will provide the most operationally efficient method for meeting the requirements because it requires minimal effort to set up and
manage.
upvoted 3 times
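To make answer B concrete, here is a hedged sketch of an AWS Backup plan with 7-year retention. The plan name, rule name, and vault name are illustrative, not taken from the question; the create call is commented so the snippet runs offline.

```python
# Sketch of an AWS Backup plan retaining DynamoDB backups for 7 years.
# Plan, rule, and vault names are made up for illustration.
RETENTION_DAYS = 7 * 365 + 2  # roughly 7 years, allowing for leap days

backup_plan = {
    "BackupPlanName": "dynamodb-7yr-plan",
    "Rules": [
        {
            "RuleName": "daily-7yr-retention",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": RETENTION_DAYS},
        }
    ],
}
# With credentials:
#   import boto3
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```

The lifecycle's `DeleteAfterDays` is what enforces the retention policy, which is why this route needs less ongoing management than scripting on-demand backups.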
6 months, 1 week ago
Selected Answer: B
Option B AWS Backup
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
AWS Backup
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 2 times
7 months, 1 week ago
Selected Answer: B
We recommend you use AWS Backup to automatically delete the backups that you no longer need by configuring your lifecycle when you created
your backup plan.
https://docs.aws.amazon.com/aws-backup/latest/devguide/deleting-backups.html
upvoted 1 times
8 months ago
Selected Answer: B
B is clear
upvoted 2 times
Topic 1
Question #79
A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not
be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very
quickly.
What should a solutions architect recommend?
A. Create a DynamoDB table in on-demand capacity mode.
B. Create a DynamoDB table with a global secondary index.
C. Create a DynamoDB table with provisioned capacity and auto scaling.
D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
Correct Answer:
A
Highly Voted
8 months ago
Selected Answer: A
On-demand mode is a good option if any of the following are true:
- You create new tables with unknown workloads.
- You have unpredictable application traffic.
- You prefer the ease of paying for only what you use.
upvoted 25 times
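The bullet points above can be sketched as a table definition. This is a minimal example, assuming a hypothetical "Jobs" table with a single string partition key; the create call is commented so the snippet runs without credentials.

```python
# Sketch of creating a DynamoDB table in on-demand capacity mode (answer A).
# Table and key names are hypothetical.
table_spec = {
    "TableName": "Jobs",
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    # PAY_PER_REQUEST = on-demand: no capacity planning, billed per request
    "BillingMode": "PAY_PER_REQUEST",
}
# With credentials:
#   import boto3
#   boto3.client("dynamodb").create_table(**table_spec)
```

Setting `BillingMode` to `PAY_PER_REQUEST` is the whole difference from provisioned mode: there are no read/write capacity units to size, which is exactly what suits the unpredictable evening spikes in the question.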
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
**A** - On demand is the answer -
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand
B - not related with the unpredictable traffic
C - provisioned capacity is recommended for known patterns. Not the case here.
D - same as C
upvoted 14 times
3 months, 3 weeks ago
Thanks. Your reference link perfectly supports the option "A". 100% correct
upvoted 1 times
Most Recent
1 week ago
Selected Answer: C
By choosing provisioned capacity, you can allocate a specific amount of read and write capacity units based on your expected usage during peak
times. This helps in cost optimization as you only pay for the provisioned capacity, which can be adjusted according to your anticipated traffic.
Enabling auto scaling allows DynamoDB to automatically adjust the provisioned capacity up or down based on the actual usage. This is beneficial
in handling quick traffic spikes without manual intervention and ensuring that the required capacity is available to handle increased load efficiently.
Auto scaling helps to optimize costs by dynamically adjusting the capacity to match the demand, avoiding overprovisioning during periods of low
usage.
A. Creating a DynamoDB table in on-demand capacity mode, may not be the most cost-effective solution in this scenario. On-demand capacity
mode charges you based on the actual usage of read and write requests, which can be beneficial for sporadic or unpredictable workloads.
However, it may not be the optimal choice if the table is not used on most mornings.
upvoted 3 times
1 month, 1 week ago
Selected Answer: A
Correct answer is A
- You create new tables with unknown workloads.
- You have unpredictable application traffic.
- You prefer the ease of paying for only what you use.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
"On-demand" is a good option for applications that have unpredictable or sudden spikes, since it automatically provisions read/write capacity.
"Provisioned capacity" is suitable for applications with predictable usage.
upvoted 1 times
Community vote distribution
A (77%)
C (23%)
2 months ago
Selected Answer: A
Answer is A.
Provisioned capacity is best if you have relatively predictable application traffic, run applications whose traffic is consistent, and ramps up or down
gradually.
On-demand capacity mode is best when you have unknown workloads, unpredictable application traffic and also if you only want to pay exactly for
what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and
when under-provisioned capacity would impact the user experience.
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
For unpredictable cases there's no way you can provision something, as it cannot be predicted, so the answer is A
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
On-demand capacity mode allows a DynamoDB table to automatically scale up or down based on the traffic to the table. This means that capacity
will be allocated as needed and billing will be based on actual usage, providing flexibility in capacity while minimizing costs. This is an ideal choice
for a table that is not used on most mornings and has unpredictable traffic spikes in the evenings.
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
Unpredictable application traffic means the answer is on-demand capacity.
"This means that provisioned capacity is probably best for you if you have relatively predictable application traffic, run applications whose traffic is
consistent, and ramps up or down gradually.
Whereas on-demand capacity mode is probably best when you have new tables with unknown workloads, unpredictable application traffic and
also if you only want to pay exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable workloads whose
traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience."
upvoted 2 times
3 months, 2 weeks ago
Selected Answer: A
Use on-demand capacity mode: With on-demand capacity mode, DynamoDB automatically scales up and down to handle the traffic without
requiring any capacity planning. This way, the company only pays for the actual amount of read and write capacity used, with no minimums or
upfront costs.
upvoted 1 times
4 months, 1 week ago
Selected Answer: A
A. This is because the traffic spikes have no set time; they can happen at any time, morning or evening.
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
C. Create a DynamoDB table with provisioned capacity and auto scaling. This will allow the table to automatically scale its capacity based on usage
patterns, which will help to optimize costs by reducing the amount of unused capacity during low traffic times and ensuring that sufficient capacity
is available during traffic spikes.
upvoted 4 times
5 months, 1 week ago
Selected Answer: C
The usage pattern is not unknown; it was well laid out in the question. I think C is the correct answer.
upvoted 4 times
5 months, 1 week ago
Selected Answer: A
I have a feeling that the need for cost-optimisation is a distractor, and that people will jump on "provisioned with auto-scaling" without considering
that provisioned capacity mode is not a good fit for the requirements. On-demand may end up cheaper as you avoid over- or underprovisioning
capacity (when using auto-scaling, you still need to define a min and max). You can later switch capacity mode once your usage pattern becomes
stable (if it ever does).
AWS say that on-demand capacity mode is a good fit for:
- Unpredictable workloads with sudden spikes (mentioned in the requirements)
- Frequently idle workloads (where the DB isn't used at all; The requirements say that it won't be used most mornings)
- Events with unknown traffic (which this is - traffic in the evenings is unpredictable)
Whereas provisioned capacity mode is used for:
- Predictable workloads
- Gradual ramps (no sudden spikes, as auto-scaling isn't instant and can cause traffic to get throttled)
- Events with known traffic
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 3 times
5 months, 1 week ago
Selected Answer: A
Initially I thought C, but after reading the comments and this page, I switched to A.
Provisioned mode is a good option if any of the following are true:
You have predictable application traffic.
You run applications whose traffic is consistent or ramps gradually.
Here https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html it mentions for
provisioned
> You can forecast capacity requirements to control costs.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: C
Provisioned capacity is less expensive. The question says usage starts in the evening, which means I can provision for that time and auto scale
up or down to address the usage spikes. I think this is a better architecture than the more expensive on-demand mode.
upvoted 1 times
1 week, 6 days ago
"In the evenings, the read and write traffic will often be unpredictable" — it says unpredictable. I also thought C was the answer, but upon reading
the sentence carefully, the evenings can't be predicted. So the better option is A.
upvoted 2 times
5 months, 2 weeks ago
A, Please refer the following link
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 1 times
Topic 1
Question #80
A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A
solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI
is backed by Amazon Elastic Block Store (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt
EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?
A. Make the encrypted AMI and snapshots publicly available. Modify the key policy to allow the MSP Partner's AWS account to use the key.
B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to allow
the MSP Partner's AWS account to use the key.
C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to trust a
new KMS key that is owned by the MSP Partner for encryption.
D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account, Encrypt the S3 bucket with a new KMS
key that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
Share the existing KMS key with the MSP external account because it has already been used to encrypt the AMI snapshot.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 14 times
Highly Voted
8 months ago
Selected Answer: B
If EBS snapshots are encrypted, then we need to share the same KMS key to partners to be able to access it. Read the note section in the link
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
upvoted 5 times
Most Recent
1 week ago
Selected Answer: B
By modifying the launchPermission property of the AMI and sharing it with the MSP Partner's account only, the solutions architect restricts access
to the AMI and ensures that it is not publicly available.
Additionally, modifying the key policy to allow the MSP Partner's account to use KMS customer managed key used for encrypting the EBS
snapshots ensures that the MSP Partner has the necessary permissions to access and use the key for decryption.
upvoted 2 times
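The two steps of answer B can be sketched as follows. The partner account ID, AMI ID, and policy statement Sid are placeholders; the EC2 call is commented out so the snippet runs offline.

```python
# Sketch of answer B: share the AMI privately, then let the partner use the key.
PARTNER_ACCOUNT = "111122223333"  # placeholder MSP Partner account ID

# Step 1: share the AMI with the partner account only via launchPermission.
launch_permission = {"Add": [{"UserId": PARTNER_ACCOUNT}]}
# With credentials:
#   import boto3
#   boto3.client("ec2").modify_image_attribute(
#       ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
#       Attribute="launchPermission",
#       LaunchPermission=launch_permission,
#   )

# Step 2: statement to append to the existing customer managed key's policy,
# so the partner account can decrypt the snapshots already encrypted with it.
key_policy_statement = {
    "Sid": "AllowMspPartnerUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}
```

The key point the comments make is visible here: the policy change targets the *existing* key, because a new key could never decrypt snapshots that were encrypted with the old one.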
1 month, 1 week ago
CORRECTION to my last comment: option B is correct, not A.
Explanation of why:
Making the AMI and snapshots publicly available is not a secure option, as it would allow anyone with access to the AMI to use it. The best
practice is to modify the launchPermission property of the AMI and share it with the MSP Partner's AWS account only. This ensures that the AMI is
shared only with the MSP Partner and is encrypted with a key that they are authorised to use.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
Option A, making the AMI and snapshots publicly available, is not a secure option as it would allow anyone with access to the AMI to use it. Best
practice would be to share the AMI with the MSP Partner's AWS account then Modify launchPermission property of the AMI. This ensures that the
AMI is shared only with the MSP Partner and is encrypted with a key that they are authorised to use.
upvoted 1 times
3 months ago
Selected Answer: D
Option D
upvoted 1 times
Community vote distribution
B (88%)
6%
6 months, 1 week ago
Selected Answer: B
***CORRECT***
B. Modify the launchPermission property of the AMI.
The most secure way for the solutions architect to share the AMI with the MSP Partner's AWS account would be to modify the launchPermission
property of the AMI and share it with the MSP Partner's AWS account only. The key policy should also be modified to allow the MSP Partner's AWS
account to use the key. This ensures that the AMI is only shared with the MSP Partner and is encrypted with a key that they are authorized to use.
upvoted 3 times
6 months, 1 week ago
Option A, making the AMI and snapshots publicly available, is not a secure option as it would allow anyone with access to the AMI to use it.
Option C, modifying the key policy to trust a new KMS key owned by the MSP Partner, is also not a secure option as it would involve sharing the
key with the MSP Partner, which could potentially compromise the security of the data encrypted with the key.
Option D, exporting the AMI to an S3 bucket in the MSP Partner's AWS account and encrypting the S3 bucket with a new KMS key owned by the
MSP Partner, is also not the most secure option as it involves sharing the AMI and a new key with the MSP Partner, which could potentially
compromise the security of the data.
upvoted 6 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
Must use and share the existing KMS key to decrypt the same key
upvoted 3 times
7 months, 3 weeks ago
Selected Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 1 times
8 months, 1 week ago
Selected Answer: C
MOST secure way should be C
upvoted 2 times
8 months, 2 weeks ago
MOST secure way should be C, with a separate key, not the one already used.
upvoted 1 times
7 months, 2 weeks ago
Must use and share the existing KMS key to decrypt the same key
upvoted 1 times
8 months, 1 week ago
A seperate/new key is not possible because it won't be able to decrypt the AMI snapshot which was already encrypted with the existing/old key.
upvoted 9 times
8 months ago
This is truth
upvoted 2 times
Topic 1
Question #81
A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while
adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The
solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the
scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling
policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the
processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling
policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
Correct Answer:
C
Highly Voted
6 months, 2 weeks ago
Selected Answer: C
decoupled = SQS
Launch template = AMI
Launch configuration = EC2
upvoted 17 times
Most Recent
1 week ago
Selected Answer: C
This design follows the best practices for loosely coupled and scalable architecture. By using SQS, the jobs are durably stored in the queue,
ensuring they are not lost. The processor application is stateless, which aligns with the design requirement. The AMI allows for consistent
deployment of the application. The launch template and ASG facilitate the dynamic scaling of the application based on the number of items in the
SQS, ensuring parallel processing of jobs.
Options A and D suggest using SNS, which is a publish/subscribe messaging service and may not provide the durability required for job storage.
Option B suggests using network usage as a scaling metric, which may not be directly related to the number of jobs to be processed. The number
of items in the SQS provides a more accurate metric for scaling based on the workload.
upvoted 3 times
1 month, 1 week ago
Selected Answer: C
C for sure
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
***CORRECT***
The correct design is Option C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI)
that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
This design satisfies the requirements of the application by using Amazon Simple Queue Service (SQS) as durable storage for the job items and
Amazon Elastic Compute Cloud (EC2) Auto Scaling to add and remove nodes based on the number of items in the queue. The processor
application can be run in parallel on multiple nodes, and the use of launch templates allows for flexibility in the configuration of the EC2 instances.
upvoted 4 times
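A hedged sketch of the scaling policy from answer C, tracking SQS queue depth. The Auto Scaling group name, queue name, and target value are made up; note that AWS's own guidance recommends a derived "backlog per instance" metric rather than raw queue depth, so this is a simplification. The API call is commented so the snippet runs offline.

```python
# Sketch of answer C: target-tracking scaling on the SQS queue depth.
# ASG and queue names are hypothetical; raw queue depth is a simplification
# of AWS's recommended "backlog per instance" custom metric.
scaling_policy = {
    "AutoScalingGroupName": "processor-asg",
    "PolicyName": "scale-on-queue-backlog",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,  # aim for roughly 10 queued jobs per instance
    },
}
# With credentials:
#   import boto3
#   boto3.client("autoscaling").put_scaling_policy(**scaling_policy)
```

This is what "loosely coupled and durable" means in practice: producers write jobs to the queue, stateless processors pull from it, and the group size follows the backlog rather than CPU or network usage.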
6 months, 1 week ago
***WRONG***
Option A is incorrect because it uses Amazon Simple Notification Service (SNS) instead of SQS, and SNS is not a durable storage solution.
Option B is incorrect because it uses network usage as the scaling trigger instead of the number of items in the queue.
Option D is incorrect for the same reasons as option A.
upvoted 4 times
Community vote distribution
C (100%)
6 months, 2 weeks ago
Selected Answer: C
SQS with EC2 autoscaling policy based number of messages in the queue.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
C is correct
upvoted 2 times
6 months, 3 weeks ago
what about the word "coupled"
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
AWS strongly recommends that you do not use launch configurations, hence the answer is C.
https://docs.amazonaws.cn/en_us/autoscaling/ec2/userguide/launch-configurations.html
upvoted 3 times
7 months ago
Selected Answer: C
The answer is C, as there is nothing called a "launch configuration"; it's called a "launch template", which is used by the Auto Scaling group to create the new
instances.
upvoted 4 times
5 months, 2 weeks ago
There's launch configuration. Search
upvoted 3 times
7 months ago
I was not sure between Launch template and Launch configuration.
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
answer is c
upvoted 1 times
7 months, 3 weeks ago
https://www.examtopics.com/discussions/amazon/view/22139-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
8 months ago
It looks like C
upvoted 1 times
8 months ago
Correct Answer: C
upvoted 1 times
Topic 1
Question #82
A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into
AWS Certificate Manager (ACM). The company's security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet this requirement?
A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30
days before any certificate will expire.
B. Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch
Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant
resource.
C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on
Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service
(Amazon SNS).
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the
rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service
(Amazon SNS).
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
B
AWS Config has a managed rule named acm-certificate-expiration-check to check for expiring certificates (configurable number of days).
upvoted 35 times
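For the B camp, here is a hedged sketch of enabling that managed rule with a 30-day threshold. The rule name mirrors the managed rule identifier; the put call is commented so the snippet runs offline.

```python
import json

# Sketch of answer B: the AWS Config managed rule that flags ACM certificates
# expiring within a configurable number of days (30 here).
config_rule = {
    "ConfigRuleName": "acm-certificate-expiration-check",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",
    },
    # InputParameters must be a JSON string, not a dict
    "InputParameters": json.dumps({"daysToExpiration": "30"}),
}
# With credentials:
#   import boto3
#   boto3.client("config").put_config_rule(ConfigRule=config_rule)
```

An EventBridge rule matching the resulting NON_COMPLIANT evaluations would then drive the SNS notification described in option B.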
3 months, 2 weeks ago
Answer B and answer D are possible according to this article.
So, need to read B & D carefully to determine the most suitable answer.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 4 times
8 months, 2 weeks ago
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 9 times
Highly Voted
8 months ago
Selected Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/acm-certificate-expiration/
upvoted 10 times
Most Recent
1 week, 4 days ago
Selected Answer: D
B is incorrect because AWS Config has the built-in rule 'acm-certificate-expiration-check', while the option says to create a new rule. If a built-in rule
exists, why create a new one? This is why D is correct.
upvoted 1 times
2 weeks ago
The answer is B.
upvoted 1 times
3 weeks, 2 days ago
I'd mark D. D meets the requirements.
upvoted 1 times
1 month ago
Selected Answer: B
EventBridge alone cannot detect the certificate expiry. It should receives the event of the certification expiration either from ACM or AWS Config.
- for ACM, it can be configured to send daily expiration events starting 45 days prior to expiration, and the number of days can be configured.
- and AWS Config has a managed rule named acm-certificate-expiration-check to check for expiring certificates (configurable number of days).
https://repost.aws/knowledge-center/acm-certificate-expiration
upvoted 1 times
Community vote distribution
D (52%)
B (48%)
1 month, 1 week ago
Selected Answer: D
https://aws.amazon.com/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
I go for D. D meets the requirements.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
The answer is D
upvoted 1 times
1 month, 2 weeks ago
ANSWER - D
Here are the steps to set this up:
Request or import the SSL/TLS certificate into ACM and attach it to the Elastic Load Balancers.
Create a new CloudWatch Events rule with a schedule expression that matches the certificate expiration date. For example, if the certificate expires
on May 31, 2023, the schedule expression should be: cron(0 0 31 5 ? 2023).
Choose a target for the CloudWatch Events rule, such as an Amazon SNS topic or an AWS Lambda function.
Configure the notification message to include information about the expiring certificate, such as its domain name, ACM certificate ID, and
expiration date.
Test the CloudWatch Events rule by simulating an expired certificate.
By using ACM to manage the certificates and CloudWatch Events to trigger the notifications, the company can ensure that the security team is
notified 30 days before the certificate expiration date, and can take appropriate actions to renew or replace the certificate in a timely manner.
upvoted 2 times
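The D-side steps above can be sketched with the expiration events that ACM itself emits (ACM starts sending these 45 days before expiry by default, and the lead time is configurable). The rule name is made up; the put_rule call is commented so the snippet runs offline.

```python
import json

# Sketch of answer D: an EventBridge rule matching ACM's own expiration events.
event_pattern = {
    "source": ["aws.acm"],
    "detail-type": ["ACM Certificate Approaching Expiration"],
}
# With credentials:
#   import boto3
#   boto3.client("events").put_rule(
#       Name="acm-cert-expiry-alert",  # hypothetical rule name
#       EventPattern=json.dumps(event_pattern),
#   )
# A Lambda target on this rule would then publish the custom SNS alert.
print(json.dumps(event_pattern))
```

Whether B or D is "more correct" on the exam, both hinge on the same fact the thread keeps circling: EventBridge only sees expiry if ACM or AWS Config emits the signal.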
1 month, 3 weeks ago
Selected Answer: D
https://aws.amazon.com/blogs/security/how-to-monitor-expirations-of-imported-certificates-in-aws-certificate-manager-acm/
upvoted 2 times
1 month, 4 weeks ago
Selected Answer: B
AWS Config has rule to check the certificate expiration, configure number of days to 30 we can attain the solution..
upvoted 1 times
1 month, 4 weeks ago
D - EventBridge alone can't check for certificate expiry. It should get the info either from ACM (in the case of an ACM-issued certificate) or from
AWS Config (when the certificate is imported into ACM from outside). So B is correct.
upvoted 1 times
2 months ago
Selected Answer: D
Option D is the correct solution to meet the requirement. By creating an Amazon EventBridge rule, the solution can monitor the certificates hosted
on the Elastic Load Balancer for any that are about to expire within 30 days. When a certificate meets this criteria, the rule triggers an AWS Lambda
function that sends an email alert to the security team using Amazon SNS. This approach is the most efficient and targeted solution to meet the
requirement as it only notifies the security team when a certificate is about to expire, reducing unnecessary notifications.
upvoted 2 times
2 months, 2 weeks ago
The correct answer is option D: Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30
days. Configure the rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple
Notification Service (Amazon SNS).
Explanation:
To meet the requirement of notifying the security team 30 days before the expiration of each certificate, a solutions architect can use Amazon
EventBridge to schedule an event that will detect any certificates that will expire within 30 days. The event rule will then trigger an AWS Lambda
function that sends a notification to the security team using Amazon SNS. This approach provides an automated and scalable solution to monitor
and notify the team about certificate expiration.
upvoted 1 times
2 months, 3 weeks ago
D
From Stephane Maarek's training course:
Option to generate the certificate outside of ACM and then import it:
• No automatic renewal; you must import a new certificate before expiry
• ACM sends daily expiration events starting 45 days prior to expiration
• The # of days can be configured
• Events appear in EventBridge
• AWS Config has a managed rule named acm-certificate-expiration-check to check for expiring certificates (configurable number of days)
upvoted 2 times
2 months, 3 weeks ago
B is the right answer.
upvoted 1 times
1 month, 3 weeks ago
Please explain why B not D is the correct answer
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Option D is the best solution because it recommends using Amazon EventBridge to detect any certificates that will expire within 30 days. Amazon
EventBridge provides a simple and scalable way to capture and route events from AWS services and third-party SaaS applications. In this case, an
Amazon CloudWatch Events rule can be created to capture certificate expiration events, which will then trigger an AWS Lambda function. The
Lambda function can be configured to send a custom alert through Amazon SNS to the security team. This solution is efficient, scalable, and
addresses the requirement of notifying the security team 30 days before the certificate expiration.
upvoted 2 times
Topic 1
Question #83
A company's dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it
wants to optimize site loading times for new European users. The site's backend must remain in the United States. The product is being launched
in a few days, and an immediate solution is needed.
What should the solutions architect recommend?
A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.
B. Move the website to Amazon S3. Use Cross-Region Replication between Regions.
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
D. Use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers.
Correct Answer:
C
Highly Voted
6 months, 1 week ago
Selected Answer: C
***CORRECT***
C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML, CSS,
JavaScript, images, and videos. By using CloudFront, the company can distribute the content of their website from edge locations that are closer to
the users in Europe, reducing the loading times for these users.
To use CloudFront, the company can set up a custom origin pointing to their on-premises servers in the United States. CloudFront will then cache
the content of the website at edge locations around the world and serve the content to users from the location that is closest to them. This will
allow the company to optimize the loading times for their European users without having to move the backend of the website to a different region.
upvoted 13 times
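As a minimal sketch of answer C, the dict below is the kind of custom-origin entry that points CloudFront at on-premises servers. The origin ID and domain name are placeholders, and only the origin fragment is shown, not a full distribution config.

```python
# Sketch of answer C: a CloudFront custom origin pointing at on-premises servers.
# The origin ID and domain name are placeholders.
origin = {
    "Id": "onprem-origin",
    "DomainName": "origin.example.com",  # public hostname of the on-prem site
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",  # CloudFront -> origin over TLS
    },
}
# This dict would go into the Origins list of the DistributionConfig passed to
# boto3.client("cloudfront").create_distribution(...).
```

Because the origin stays on-premises, this meets the "backend must remain in the United States" constraint while European users are served from nearby edge locations, which is why it works on a few days' notice.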
3 months, 3 weeks ago
good explanation..thanks
upvoted 1 times
6 months, 1 week ago
***WRONG***
Option A (launch an Amazon EC2 instance in us-east-1 and migrate the site to it) would not address the issue of optimizing loading times for
European users.
Option B (move the website to Amazon S3 and use Cross-Region Replication between Regions) would not be an immediate solution as it would
require time to set up and migrate the website.
Option D (use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers) would not be suitable because it would not
improve the loading times for users in Europe.
upvoted 6 times
Most Recent
1 week ago
Selected Answer: C
C. This solution leverages the global network of CloudFront edge locations to cache and serve the website's static content from the edge locations
closest to the European users.
A. Hosting the website in a single region would still result in increased latency for European users accessing the site.
B. Moving the website to S3 and implementing Cross-Region Replication would distribute the website's static content across Regions, including
Europe, but S3 is primarily used for static content hosting and does not provide the server-side processing capabilities necessary for dynamic
website functionality.
D. Using a geoproximity routing policy in Route 53 would allow you to direct traffic to the on-premises servers based on the geographic location of
the users. However, this option does not optimize site loading times for European users as it still requires them to access the website from the on-
premises servers in the United States. It does not leverage the benefits of content caching and edge locations for improved performance.
upvoted 2 times
1 month, 1 week ago
Selected Answer: C
C is best solution.
upvoted 1 times
Community vote distribution
C (100%)
5 months, 4 weeks ago
Selected Answer: C
Within a few days, you cannot do more than use CloudFront.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 4 weeks ago
Selected Answer: C
C is correct answer
upvoted 1 times
7 months ago
Selected Answer: C
CloudFront = CDN Service
upvoted 3 times
7 months ago
C.
S3 Cross-Region Replication minimizes latency and copies objects across Amazon S3 buckets in different AWS Regions (though the data has to remain in the origin bucket), so B is wrong.
Route 53 geoproximity routing does not help reduce the latency.
upvoted 2 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 4 weeks ago
Same question with detailed explanation
https://www.examtopics.com/discussions/amazon/view/27898-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
8 months, 1 week ago
Selected Answer: C
Option C, use CloudFront.
upvoted 3 times
Topic 1
Question #84
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon
EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10%
CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans
to implement automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?
A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
Spot blocks are no longer available, and you can't use Spot Instances on prod machines 24x7, so option B should be valid.
upvoted 11 times
Most Recent
1 week ago
Selected Answer: B
Option B, would indeed be the most cost-effective solution. Reserved Instances provide cost savings for instances that run consistently, such as the
production environment in this case, while On-Demand Instances offer flexibility and are suitable for instances with variable usage patterns like the
development and test environments. This combination ensures cost optimization based on the specific requirements and usage patterns described
in the question.
upvoted 2 times
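A back-of-the-envelope comparison makes the cost argument for option B concrete. The hourly rates below are hypothetical (real prices depend on instance type and Region); the point is the shape of the calculation, not the numbers.

```python
# Hypothetical rates: Reserved ~0.06 USD/h, On-Demand ~0.10 USD/h.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, hours_per_day: float) -> float:
    """Monthly cost for one instance running hours_per_day every day."""
    return hourly_rate * hours_per_day * (HOURS_PER_MONTH / 24)

# Option B: Reserved for prod (24 h/day), On-Demand for dev/test (8 h/day each,
# stopped by the planned automation the rest of the time)
prod_reserved = monthly_cost(0.06, 24)
dev_test_on_demand = 2 * monthly_cost(0.10, 8)
option_b = prod_reserved + dev_test_on_demand

# Naive baseline: all three environments On-Demand, never stopped
all_on_demand = 3 * monthly_cost(0.10, 24)

print(f"Option B: ${option_b:.2f}/month vs all On-Demand: ${all_on_demand:.2f}/month")
```

Even with made-up prices, the Reserved discount on the always-on production instance plus stopping dev/test outside working hours dominates the savings.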
1 month, 1 week ago
Selected Answer: B
B meets the requirements, and most cost-effective.
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
Spot Instances are not suitable for production because they can be interrupted at any time.
upvoted 1 times
3 months ago
Answer B:
Spot blocks are no longer available, and you can't use Spot Instances in production.
upvoted 1 times
6 months ago
Selected Answer: B
Well, AWS has DISCONTINUED the Spot block option, so that rules out the two options that use Spot blocks. Wait, this question must be from SAA-C02 or even C01. STALE QUESTION. I don't think this will feature in SAA-C03. Anyhow, the most cost-effective solution would be option B.
upvoted 3 times
6 months ago
Selected Answer: B
Choosing B as spot blocks (Spot instances with a finite duration) are no longer offered since July 2021
upvoted 1 times
1 month ago
https://aws.amazon.com/ec2/spot/
upvoted 1 times
Community vote distribution
B (92%)
8%
6 months, 1 week ago
Selected Answer: A
The most cost-effective solution for the company's requirements would be to use Spot Instances for the development and test EC2 instances and
Reserved Instances for the production EC2 instances.
Spot Instances are a cost-effective choice for non-critical, flexible workloads that can be interrupted. Since the development and test EC2 instances
are only needed for at least 8 hours per day and can be stopped when not in use, they would be a good fit for Spot Instances.
upvoted 2 times
6 months ago
The production EC2 instances run 24 hours a day.
upvoted 2 times
6 months, 1 week ago
Reserved Instances are a good fit for production EC2 instances that need to run 24 hours a day, as they offer a significant discount compared to
On-Demand Instances in exchange for a one-time payment and a commitment to use the instances for a certain period of time.
Option A is the correct answer because it meets the company's requirements for cost-effectively running the development and test EC2
instances and the production EC2 instances.
upvoted 1 times
6 months, 1 week ago
Option B is not the most cost-effective solution because it suggests using On-Demand Instances for the development and test EC2 instances,
which would be more expensive than using Spot Instances. On-Demand Instances are a good choice for workloads that require a guaranteed
capacity and can't be interrupted, but they are more expensive than Spot Instances.
Option C is not the correct solution because Spot blocks are a variant of Spot Instances that offer a guaranteed capacity and duration, but
they are not available for all instance types and are not necessarily the most cost-effective option in all cases. In this case, it would be more
cost-effective to use Spot Instances for the development and test EC2 instances, as they can be interrupted when not in use.
upvoted 1 times
4 months ago
Can't use Spot instances for Production environment that needs to run 24/7. That should tell you that Production instances can't have a
downtime. Spot instances are used when an application or service can allow disruption and 24/7 production environment won't allow
that.
upvoted 2 times
6 months, 1 week ago
Option D is not the correct solution because it suggests using On-Demand Instances for the production EC2 instances, which would be
more expensive than using Reserved Instances. On-Demand Instances are a good choice for workloads that require a guaranteed capacity
and can't be interrupted, but they are more expensive than Reserved Instances in the long run. Using Reserved Instances for the
production EC2 instances would offer a significant discount compared to On-Demand Instances in exchange for a one-time payment and
a commitment to use the instances for a certain period of time.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Reserved Instances for 24/7 production instances seem reasonable. By exclusion I will choose On-Demand for dev and test (despite thinking that Spot Fleets may be an even better solution from a cost perspective).
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
Reserved Instances and On-demand
Spot is out as the use case requires continuous instance running
upvoted 1 times
7 months, 3 weeks ago
B is the answer
https://www.examtopics.com/discussions/amazon/view/80956-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Topic 1
Question #85
A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new
regulatory requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?
A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled.
B. Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically.
C. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only.
D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-
only mode.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
You can use S3 Object Lock to store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being
deleted or overwritten for a fixed amount of time or indefinitely. You can use S3 Object Lock to meet regulatory requirements that require WORM
storage, or add an extra layer of protection against object changes and deletion.
Versioning is required and automatically activated as Object Lock is enabled.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 21 times
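As a concrete sketch of option A, the parameter shapes below mirror boto3's S3 client (the bucket name and retention period are illustrative): Object Lock must be enabled when the bucket is created, which also turns on Versioning, and a default retention rule in COMPLIANCE mode then makes every new object write-once-read-many.

```python
def bucket_creation_params(bucket: str) -> dict:
    """Parameters for s3.create_bucket(**params); Object Lock must be set at creation."""
    return {"Bucket": bucket, "ObjectLockEnabledForBucket": True}

def default_retention_params(bucket: str, years: int) -> dict:
    """Parameters for s3.put_object_lock_configuration(**params)."""
    return {
        "Bucket": bucket,
        "ObjectLockConfiguration": {
            "ObjectLockEnabled": "Enabled",
            # COMPLIANCE mode: no user, not even root, can shorten or remove
            # the retention; GOVERNANCE mode would allow privileged overrides.
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": years}},
        },
    }

params = default_retention_params("audit-docs", years=7)
print(params["ObjectLockConfiguration"]["Rule"]["DefaultRetention"]["Mode"])
```

With this in place, uploads need no extra flags: every new version inherits the default retention, satisfying the regulatory "no modify, no delete" requirement.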
Highly Voted
6 months, 1 week ago
Selected Answer: A
***CORRECT***
A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled.
S3 Versioning allows multiple versions of an object to be stored in the same bucket. This means that when an object is modified or deleted, the
previous version is preserved. S3 Object Lock adds additional protection by allowing objects to be placed under a legal hold or retention period,
during which they cannot be deleted or modified. Together, S3 Versioning and S3 Object Lock can be used to meet the requirement of not allowing
documents to be modified or deleted after they are stored.
upvoted 5 times
6 months, 1 week ago
***WRONG***
Option B, storing the documents in an S3 bucket and configuring an S3 Lifecycle policy to archive them periodically, would not prevent the
documents from being modified or deleted.
Option C, storing the documents in an S3 bucket with S3 Versioning enabled and configuring an ACL to restrict all access to read-only, would
also not prevent the documents from being modified or deleted, since an ACL only controls access to the object and does not prevent it from
being modified or deleted.
Option D, storing the documents on an Amazon Elastic File System (Amazon EFS) volume and accessing the data in read-only mode, would
prevent the documents from being modified, but would not prevent them from being deleted.
upvoted 1 times
Most Recent
1 week ago
Selected Answer: A
S3 Versioning allows you to preserve every version of a document as it is uploaded or modified. This prevents accidental or intentional
modifications or deletions of the documents.
S3 Object Lock allows you to set a retention period or legal hold on the objects, making them immutable during the specified period. This ensures
that the stored documents cannot be modified or deleted, even by privileged users or administrators.
B. Configuring an S3 Lifecycle policy to archive documents periodically does not guarantee the prevention of document modification or deletion
after they are stored.
C. Enabling S3 Versioning alone does not prevent modifications or deletions of objects. Configuring an ACL does not guarantee the prevention of
modifications or deletions by authorized users.
D. Using EFS does not prevent modifications or deletions of the documents by users or processes with write permissions.
upvoted 2 times
1 month, 1 week ago
Selected Answer: A
S3 Versioning and S3 Object Lock enabled meet the requirements, hence A is correct ans.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
Option A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled. This will ensure that the
documents cannot be modified or deleted after they are stored, and will meet the regulatory requirement. S3 Versioning allows you to store
multiple versions of an object in the same bucket, and S3 Object Lock enables you to apply a retention policy to objects in the bucket to prevent
their deletion.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: A
Option A. Object Lock will prevent modifications to documents
upvoted 1 times
6 months, 3 weeks ago
Why not C
upvoted 3 times
6 months, 1 week ago
If you configured an ACL to restrict all access to read-only, you could not write the docs to the bucket in the first place.
upvoted 2 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 1 times
8 months, 1 week ago
Selected Answer: A
aaaaaaaaa
upvoted 1 times
8 months, 1 week ago
aaaaaaaaaaa
upvoted 1 times
Topic 1
Question #86
A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a
secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently.
Which solution meets these requirements?
A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS
Secrets Manager.
B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to
access OpsCenter.
C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to
retrieve credentials and access the database.
D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The
web server should be able to decrypt the files and access the database.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve
the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer
exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This
enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise.
https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
upvoted 16 times
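A minimal sketch of what option A needs, assuming placeholder ARNs and secret names: a least-privilege IAM policy that lets the web servers read exactly one secret, plus the lookup each server performs at connect time.

```python
import json

def web_server_secret_policy(secret_arn: str) -> dict:
    """Least-privilege IAM policy: read one secret, nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": secret_arn,
        }],
    }

policy = web_server_secret_policy(
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/mysql-abc123"
)
print(json.dumps(policy, indent=2))

# At runtime each web server would fetch fresh credentials roughly like:
#   secret = boto3.client("secretsmanager").get_secret_value(SecretId="prod/mysql")
#   creds = json.loads(secret["SecretString"])  # {"username": ..., "password": ...}
# so a rotation performed by Secrets Manager is picked up on the next
# connection without redeploying the web servers.
```

Because the servers fetch the secret on demand instead of caching a hardcoded password, the frequent-rotation requirement is met by Secrets Manager's rotation schedule alone.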
Most Recent
1 week ago
Selected Answer: A
B. SSM OpsCenter is primarily used for managing and resolving operational issues. It is not designed to securely store and manage credentials like
AWS Secrets Manager.
C. Storing credentials in an S3 bucket may provide some level of security, but it lacks the additional features and security controls offered by AWS
Secrets Manager.
D. While using KMS for encryption is a good practice, managing credentials directly on the web server file system can introduce complexities and
potential security risks. It can be challenging to securely manage and rotate credentials across multiple web servers, especially when considering
scalability and automation.
In summary, option A is the recommended solution as it leverages AWS Secrets Manager, which is purpose-built for securely storing and managing
secrets, and provides the necessary IAM permissions to allow the web servers to access the credentials securely.
upvoted 2 times
1 month, 1 week ago
Selected Answer: A
Option A is ans.
upvoted 1 times
4 months, 1 week ago
Selected Answer: A
A is correct
upvoted 1 times
5 months, 3 weeks ago
literally screams for AWS Secrets Manager to rotate the credentials
upvoted 4 times
6 months, 1 week ago
Selected Answer: A
***CORRECT***
Option A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access
AWS Secrets Manager.
Option A is correct because it meets the requirements specified in the question: a secure method for the web servers to connect to the database
while meeting a security requirement to rotate user credentials frequently. AWS Secrets Manager is designed specifically to store and manage
secrets like database credentials, and it provides an automated way to rotate secrets every time they are used, ensuring that the secrets are always
fresh and secure. This makes it a good choice for storing and managing the database user credentials in a secure way.
upvoted 3 times
6 months, 1 week ago
***WRONG***
Option B, storing the database user credentials in AWS Systems Manager OpsCenter, is not a good fit for this use case because OpsCenter is a
tool for managing and monitoring systems, and it is not designed for storing and managing secrets.
Option C, storing the database user credentials in a secure Amazon S3 bucket, is not a secure option because S3 buckets are not designed to
store secrets. While it is possible to store secrets in S3, it is not recommended because S3 is not a secure secrets management service and does
not provide the same level of security and automation as AWS Secrets Manager.
upvoted 3 times
6 months, 1 week ago
Option D, storing the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file
system, is not a secure option because it relies on the security of the web server file system, which may not be as secure as a dedicated
secrets management service like AWS Secrets Manager. Additionally, this option does not meet the requirement to rotate user credentials
frequently because it does not provide an automated way to rotate the credentials.
upvoted 4 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: A
Rotate credentials = Secrets Manager
upvoted 3 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: A
Answer is A
upvoted 2 times
Topic 1
Question #87
A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save
customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish
database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?
A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the
RDS proxy.
B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the
database.
C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to
the database.
D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the
queue and stores the customer data in the database.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
https://aws.amazon.com/rds/proxy/
RDS Proxy minimizes application disruption from outages affecting the availability of your database by automatically connecting to a new database
instance while preserving application connections. When failovers occur, RDS Proxy routes requests directly to the new database instance. This
reduces failover times for Aurora and RDS databases by up to 66%.
upvoted 30 times
1 month, 2 weeks ago
This is incorrect as nowhere in the question is mentioned the RDS have more than 1 instance. So... when the instance is down for maintenance
there is no second instance to which RDS Proxy can redirect the requests.
The correct answer is D.
upvoted 3 times
7 months, 1 week ago
Aurora supports RDS proxy!
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 5 times
6 months ago
This is MySQL Database. RDS proxy = no no
upvoted 1 times
2 months, 2 weeks ago
It literally says RDS Proxy is available for Aurora MySQL on the link in the comment you're replying to.
upvoted 3 times
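For context on what "provision an RDS proxy" (option A) actually entails, here is a hedged sketch of the request parameters, shaped like boto3's `rds.create_db_proxy` call; all ARNs, subnet IDs, and names are placeholders.

```python
def proxy_creation_params(secret_arn: str, role_arn: str, subnets: list) -> dict:
    """Parameters for rds.create_db_proxy(**params) fronting an Aurora MySQL cluster."""
    return {
        "DBProxyName": "aurora-mysql-proxy",
        "EngineFamily": "MYSQL",  # covers both RDS MySQL and Aurora MySQL
        "Auth": [{
            "AuthScheme": "SECRETS",  # proxy fetches DB creds from Secrets Manager
            "SecretArn": secret_arn,
            "IAMAuth": "DISABLED",
        }],
        "RoleArn": role_arn,       # IAM role that lets the proxy read the secret
        "VpcSubnetIds": subnets,
    }

params = proxy_creation_params(
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
    "arn:aws:iam::123456789012:role/rds-proxy-role",
    ["subnet-aaa", "subnet-bbb"],
)
print(params["DBProxyName"])
```

The Lambda functions would then use the proxy endpoint instead of the cluster endpoint. Note, as the thread discusses, the proxy smooths over failovers and connection churn but does not itself buffer writes while the database is offline.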
Highly Voted
8 months, 1 week ago
Selected Answer: D
The answer is D.
RDS Proxy doesn't support Aurora DBs. See limitations at:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 18 times
7 months, 1 week ago
Actually RDS Proxy supports Aurora DBs running on PostgreSQL and MySQL.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.RDS_Proxy.html
With RDS Proxy, you only expose a single endpoint for requests to hit, and any failure of the primary DB in a Multi-AZ configuration will be managed automatically by RDS Proxy, which points to the new primary DB. Hence RDS Proxy is the most efficient way of solving the issue, as no additional code change is required.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.howitworks.html
upvoted 8 times
2 months, 3 weeks ago
The question doesn't say the RDS is deployed in a Mutli-AZ mode. which means RDS is not accessible during upgrade anyway. RDS proxy
couldn't resolve the DB HA issue. The question is looking for a solution to store the data during DB upgrade. I don't know RDS proxy very
well, but the RDS proxy introduction doesn't mention it has the capability of storing data. So, answer A couldn't store the data created
during the DB upgrade.
I'm assuming this is a bad question design. The expected answer is A, but the question designer missed some important information.
upvoted 2 times
1 month, 2 weeks ago
https://aws.amazon.com/rds/proxy/, if you go down the page, you will see that RDS Proxy is deployed in Multi-AZ (Amazon RDS Proxy is highly
available and deployed over multiple Availability Zones (AZs) to protect you from infrastructure failure. Each AZ runs on its own physically
distinct, independent infrastructure and is engineered to be highly reliable. In the unlikely event of an infrastructure failure, the RDS Proxy
endpoint remains online and consistent allowing your application to continue to run database operations.) from the link.
upvoted 1 times
6 months, 1 week ago
It does, according to that link
upvoted 1 times
7 months ago
You can use RDS Proxy with Aurora Serverless v2 clusters but not with Aurora Serverless v1 clusters.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 3 times
Most Recent
1 week ago
Selected Answer: D
A. It does not address the issue of storing customer data during database upgrades. The problem lies in the Lambda failing to establish
connections during upgrades.
B. Increasing the Lambda run time and implementing a retry mechanism can help mitigate some failures, but it does not provide a reliable solution
for storing customer data during database upgrades. The issue is not with the Lambda functions' execution time or retry logic, but with the
database connection failures during upgrades.
C. Lambda local storage is temporary and is not designed for durable data storage. It is not a reliable solution for persisting customer data,
especially during database upgrades.
In summary, option D is the recommended solution as it utilizes an SQS FIFO queue to store customer data. By decoupling the data storage from
the database connection, the Lambda can store the data reliably in the queue even during database upgrades. A separate Lambda can then poll
the queue and save the customer data to the database, ensuring no data loss during upgrade periods.
upvoted 2 times
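The decoupling pattern behind option D can be simulated locally. The sketch below stands in an in-memory deque for the SQS FIFO queue and a plain list and flag for the database and the upgrade window, so every name and detail here is illustrative only.

```python
from collections import deque

queue = deque()        # stand-in for the SQS FIFO queue
database = []          # stand-in for the Aurora table
db_available = False   # the upgrade window: no connections succeed

def ingest_handler(customer_record: dict) -> None:
    """API-triggered Lambda: never touches the DB directly.
    In AWS this would be sqs.send_message(..., MessageGroupId=...)."""
    queue.append(customer_record)

def writer_handler() -> None:
    """Polling Lambda: drains the queue only when the DB accepts connections.
    In AWS: receive_message -> INSERT -> delete_message."""
    while queue and db_available:
        database.append(queue.popleft())

# During the upgrade: records accumulate in the queue instead of being lost.
ingest_handler({"id": 1})
ingest_handler({"id": 2})
writer_handler()
assert database == [] and len(queue) == 2

# Upgrade finishes: the writer catches up, preserving FIFO order.
db_available = True
writer_handler()
print(database)  # [{'id': 1}, {'id': 2}]
```

The FIFO queue's ordering and deduplication are what make the catch-up safe; the key property is that ingestion no longer shares the database's availability.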
1 month ago
Selected Answer: D
The best solution is to store the customer data in an Amazon SQS queue when the Lambda functions can't connect to the database during an
upgrade. A new Lambda function can then poll the SQS queue and store the customer data in the database once the upgrade is complete.
The other solution:
A) An RDS proxy would not buffer/store the data during an outage.
B) Increasing Lambda run time and retries would not store the data that fails during the retries.
C) Lambda local storage is ephemeral and data would be lost after a function execution.
upvoted 1 times
1 month ago
Selected Answer: A
https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
Supports Aurora MySQL or Amazon RDS MySQL. It is designed for that reason.
upvoted 1 times
1 month ago
Selected Answer: D
RDS Proxy is for HA but not suitable for storing data during a DB outage; I think D is the correct answer
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
Correct answer is D
Aurora supports RDS proxy
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
It was a mistake on my side; I wanted to choose D, not A.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
Option A is the best choice, I think.
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
During the backend database's upgrade, user data routed through an RDS proxy will still not be saved to the database, because the database is unavailable while under upgrade.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
Amazon RDS Proxy is highly available and deployed over multiple Availability Zones (AZs) to protect you from infrastructure failure. Each AZ runs on
its own physically distinct, independent infrastructure and is engineered to be highly reliable. In the unlikely event of an infrastructure failure, the
RDS Proxy endpoint remains online and consistent allowing your application to continue to run database operations.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
RDS Proxy is a valuable tool for managing database connectivity, but it is not designed to store data. We need to record customer data during
outage so it should be D IMO.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: D
What if all the databases have the same specific upgrade window? Then even with RDS Proxy, you won't be able to buffer the data either. Using an SQS queue can help you store up to 2 GB of data before inserting it into the database.
RDS Proxy is good for reads, not writes.
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: A
The answer is RDS Proxy to handle the database upgrades.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: D
The RDS instance is not described as Multi-AZ, so it will be down and inaccessible while upgrading, and RDS Proxy will not be able to store any data that is to be inserted into RDS.
But SQS can store the data while the instance is upgrading.
upvoted 3 times
2 months ago
Selected Answer: A
letter A
upvoted 2 times
2 months ago
For me, it is A.
The next documentation
https://aws.amazon.com/es/blogs/database/improving-application-availability-with-amazon-rds-proxy/
"Failover occurs when the primary database instance becomes inaccessible and another instance takes over as the new primary. This disrupts client
connections. Failovers can be planned, when they are induced by administrative actions such as a rolling upgrade, or unplanned, when they occur
due to failures. In both cases, you want to reduce downtime to minimize client disruption."
upvoted 2 times
Topic 1
Question #88
A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that
is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants
to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?
A. Configure the Requester Pays feature on the company's S3 bucket.
B. Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.
C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket.
D. Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
this question is too vague imho
if the question is looking for a way to incur charges to the European company instead of the US company, then requester pay makes sense.
if they are looking to reduce overall data transfer cost, then B makes sense because the data does not leave the AWS network, thus data transfer
cost should be lower technically?
A. makes sense because the US company saves money, but the European company is paying for the charges so there is no overall saving in cost
when you look at the big picture
I will go for B because they are not explicitly stating that they want the other company to pay for the charges
upvoted 38 times
1 month, 4 weeks ago
Agree, B) Cross Region Replication: $0.02/GB
A) over the internet it is $0.09/GB
Answer is B
upvoted 4 times
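Plugging the commenter's illustrative rates into the 3 TB dataset gives a quick sanity check (the per-GB prices above are examples; actual pricing varies by Region pair):

```python
DATASET_GB = 3 * 1024  # the 3 TB dataset, in GB

# Illustrative rates from the comment above
replication_cost = DATASET_GB * 0.02   # cross-Region replication, USD/GB
internet_cost = DATASET_GB * 0.09      # internet egress, USD/GB

print(f"Replication: ${replication_cost:.2f}  Internet egress: ${internet_cost:.2f}")
```

At these rates replication is several times cheaper for a one-time copy, which is the core of the argument for B when total cost (rather than who pays) is the goal.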
6 months ago
I disagree. The question says, "the company wants to ensure that ITS data transfer costs remain as low as possible" -- 'it' being the US company.
The question would have stated "ensure that data transfer costs" (without the word 'its') if they meant the overall data transfer cost.
upvoted 11 times
3 months, 3 weeks ago
I concur with your explanation 100%
upvoted 1 times
Highly Voted
8 months, 1 week ago
Selected Answer: A
"Typically, you configure buckets to be Requester Pays buckets when you want to share data but not incur charges associated with others accessing
the data. For example, you might use Requester Pays buckets when making available large datasets, such as zip code directories, reference data,
geospatial information, or web crawling data."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 22 times
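As a concrete sketch of option A, the parameter shape below mirrors boto3's `s3.put_bucket_request_payment` call (the bucket name is a placeholder), plus the opt-in parameter the requester must then send.

```python
def requester_pays_params(bucket: str) -> dict:
    """Parameters for s3.put_bucket_request_payment(**params)."""
    return {
        "Bucket": bucket,
        "RequestPaymentConfiguration": {"Payer": "Requester"},
    }

params = requester_pays_params("survey-data")
print(params["RequestPaymentConfiguration"]["Payer"])

# The marketing firm's downloads must then explicitly opt in, e.g.:
#   s3.get_object(Bucket="survey-data", Key=..., RequestPayer="requester")
# Requests without the RequestPayer flag are rejected, which is how S3
# ensures the requester has agreed to pay the transfer charges.
```

This shifts the per-GB download charges to the requester; as the thread notes, it lowers the bucket owner's bill rather than the total transfer cost.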
Most Recent
1 day, 21 hours ago
Selected Answer: A
Making the requester pay means transfers cost the bucket owner nothing, which is the cheapest.
upvoted 1 times
1 week ago
Selected Answer: C
A. Enabling the Requester Pays feature would shift the data transfer costs to the European marketing firm, but it may not be the most cost-effective
solution.
B. Enabling cross-region replication would copy the data from the company's S3 to the marketing firm's S3, but it would incur additional data
transfer costs. This solution doesn't focus on minimizing data transfer costs for the company.
D. Using S3 Intelligent-Tiering and syncing the bucket to the marketing firm's S3 may help optimize storage costs by automatically moving objects
to the most cost-effective storage class. However, it does not specifically address the goal of minimizing data transfer costs for the company.
In summary, option C is the recommended solution as it allows the marketing firm to access the company's S3 through cross-account access. This
enables the marketing firm to retrieve the data directly from the company's bucket without incurring additional data transfer costs. It ensures that
the survey company retains control over its data and can minimize its own data transfer expenses.
upvoted 2 times
1 week, 2 days ago
B makes more sense; the question didn't state you should do away with the cost completely
upvoted 1 times
3 weeks, 3 days ago
Selected Answer: B
The question asks you to find a way to decrease the expense, not to transfer the expense to someone else.
Cross Region Replication: $0.02/GB
Over the internet, it is $0.09/GB
upvoted 1 times
4 weeks ago
Selected Answer: A
European company should pay for the transfer costs.
upvoted 1 times
1 month ago
Selected Answer: A
With Requester Pays, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
The bucket owner's cost is minimal when the receiving party pays the transfer cost; hence, A.
If no paying party were specified, option B would be the correct answer.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
The key words here are "data transfer costs" remain as low as possible, hence B in correct option.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
I will go for option B.
upvoted 1 times
1 month, 1 week ago
Answer C
A. The cost would then still sit with the firm's department
B. Replication will impose costs on the marketing side
C. Just add cross-account access and you are good to go
D. Nah
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
(A) can't be used since it's not specified whether the cost optimization target is one company or both.
As Solutions Architects we need to consider the business context. Requester Pays means the partner pays for the service, and this becomes part
of the expense of the overall business model. Since the requirement does not specify this, we can't assume the cost optimization is for the survey
company only. Hence B, as it optimizes cost in total.
upvoted 1 times
2 months ago
Selected Answer: B
A is wrong because the question doesn't specify which company needs to save costs.
upvoted 1 times
2 months ago
Selected Answer: A
Requester
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
They are different companies, the American company wants to reduce its own costs, not the European company's costs.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
Correct Answer : A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
upvoted 1 times
Topic 1
Question #89
A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM
user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3
bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?
A. Enable the versioning and MFA Delete features on the S3 bucket.
B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates.
D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS
key.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: A
Same as Question #44
upvoted 10 times
Most Recent
1 week ago
B. Enabling MFA on the IAM user credentials adds an extra layer of security to the user authentication process. However, it does not specifically
address the concern of accidental deletion of documents in the S3 bucket.
C. Adding an S3 Lifecycle policy to deny the delete action during audit dates would prevent intentional deletions during specific time periods.
However, it does not address accidental deletions that can occur at any time.
D. Using KMS for encryption and restricting access to the KMS key provides additional security for the data stored in the S3 . However, it does not
directly prevent accidental deletion of documents in the S3.
Enabling versioning and MFA Delete on the S3 (option A) is the most appropriate solution for securing the audit documents. Versioning ensures
that multiple versions of the documents are stored, allowing for easy recovery in case of accidental deletions. Enabling MFA Delete requires the use
of multi-factor authentication to authorize deletion actions, adding an extra layer of protection against unintended deletions.
upvoted 2 times
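For reference, option A maps to a single `put_bucket_versioning` call. A sketch of the boto3 parameters, assuming a hypothetical bucket and MFA device; note that MFA Delete can only be enabled via the API/CLI by the bucket owner's root credentials, and the MFA value is the device ARN plus the current token in one space-separated string:

```python
def mfa_delete_kwargs(bucket_name, mfa_device_arn, token_code):
    """Keyword arguments for s3_client.put_bucket_versioning().

    All identifiers here are hypothetical placeholders. Enabling both
    versioning and MFA Delete means deletes keep prior versions, and
    permanently deleting a version requires a valid MFA token.
    """
    return {
        "Bucket": bucket_name,
        # Serial number (ARN) and current token code, space-separated.
        "MFA": f"{mfa_device_arn} {token_code}",
        "VersioningConfiguration": {
            "Status": "Enabled",
            "MFADelete": "Enabled",
        },
    }
```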
1 month, 1 week ago
Selected Answer: A
A is answer.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
A is answer.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
A is correct.
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
Only accidental deletion should be avoided; an IAM policy would completely remove their access. Hence, MFA is the right choice.
upvoted 1 times
5 months, 2 weeks ago
what about : IAM policies are used to specify permissions for AWS resources, and they can be used to allow or deny specific actions on those
resources.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteObject",
      "Effect": "Deny",
      "Action": "s3:DeleteObject",
      "Resource": [
        "arn:aws:s3:::my-bucket/my-object",
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
Community vote distribution
A (100%)
upvoted 2 times
5 months, 1 week ago
Only accidental deletion should be avoided; an IAM policy would completely remove their access. Hence, MFA is the right choice.
upvoted 1 times
6 months ago
Selected Answer: A
The solution architect should do Option A: Enable the versioning and MFA Delete features on the S3 bucket.
This will secure the audit documents by providing an additional layer of protection against accidental deletion. With versioning enabled, any
deleted or overwritten objects in the S3 bucket will be preserved as previous versions, allowing the company to recover them if needed. With MFA
Delete enabled, any delete request made to the S3 bucket will require the use of an MFA code, which provides an additional layer of security.
upvoted 2 times
6 months ago
Option B: Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account, would not provide
protection against accidental deletion.
Option C: Adding an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates, which
would not provide protection against accidental deletion outside of the specified audit dates.
Option D: Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the
KMS key, would not provide protection against accidental deletion.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: A
A is the right answer
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
Enable the versioning and MFA Delete features on the S3 bucket.
upvoted 1 times
Topic 1
Question #90
A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance.
A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must
report a final total during business hours.
The company's development team notices that the database performance is inadequate for development tasks when the script is running. A
solutions architect must recommend a solution to resolve this issue.
Which solution will meet this requirement with the LEAST operational overhead?
A. Modify the DB instance to be a Multi-AZ deployment.
B. Create a read replica of the database. Configure the script to query only the read replica.
C. Instruct the development team to manually export the entries in the database at the end of each day.
D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
ElastiCache is for reading common results. The script is looking for new movies added. A read replica would be the best choice.
upvoted 23 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: B
• You have a production DB that is taking on a normal load
• You want to run a reporting application to run some analytics
• You create a read replica to run the new workload there
• The prod application is unaffected
• Read replicas are used for SELECT (=read) only kind of statements
Therefore I believe B to be the better answer.
As for "D" - ElastiCache use cases are:
1. Your data is slow or expensive to get when compared to cache retrieval.
2. Users access your data often.
3. Your data stays relatively the same, or if it changes quickly staleness is not a large issue.
1 - Somewhat true.
2 - Not true for our case.
3 - Also not true. The data changes throughout the day.
For my understanding, caching has to do with millisecond results, high-performance reads. These are not the issues mentioned in the questions,
therefore B.
upvoted 10 times
4 months, 4 weeks ago
I will support this by pointing to the question: "with the LEAST operational overhead?"
Configuring a read replica is much easier than configuring and integrating a new service.
upvoted 1 times
Most Recent
1 week ago
Selected Answer: B
A. Modifying the DB to be a Multi-AZ deployment improves high availability and fault tolerance but does not directly address the performance
issue during the script execution.
C. Instructing the development team to manually export the entries in the database introduces manual effort and is not a scalable or efficient
solution.
D. While using ElastiCache for caching can improve read performance for common queries, it may not be the most suitable solution for the
scenario described. Caching is effective for reducing the load on the database for frequently accessed data, but it may not directly address the
performance issue during the script execution.
Creating a read replica of the database (option B) provides a scalable solution that offloads read traffic from the primary database. The script can
be configured to query the read replica, reducing the impact on the primary database during the script execution.
upvoted 2 times
Community vote distribution
B (95%)
5%
1 month ago
Selected Answer: B
For LEAST operational overhead, I recommended to use read replica DB
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
Option B will reduce the burden on the DB, because the script will read only from the replica, not from the primary DB; hence option B is the correct answer.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
B is correct. Use a read replica for read-only scripts and analytical loads.
upvoted 1 times
2 months ago
Selected Answer: B
B is correct. Run the script on the read replica.
upvoted 1 times
3 months ago
B:
read replica would be the best choice
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
The reason to have a read replica is improved performance (key phrase), which is native to RDS; ElastiCache may have cache misses.
The other way of looking at this question is: ElastiCache could be beneficial for development tasks (and hence improve the overall DB
performance). But Option D says the queries for the script are cached, not the DB content (or metadata). This may not necessarily
improve the performance of the DB.
So, Option B is the best answer.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
The correct answer would be option B
upvoted 1 times
6 months ago
Selected Answer: B
D is incorrect. The requirement says LEAST OPERATIONAL OVERHEAD. With ElastiCache you would need to heavily modify your scripts/code to
accommodate ElastiCache into the architecture, which is higher operational overhead compared to turning the DB into Multi-AZ mode.
upvoted 3 times
6 months, 1 week ago
Selected Answer: B
***CORRECT***
The best solution to meet the requirement with the least operational overhead would be to create a read replica of the database and configure the
script to query only the read replica. Option B.
A read replica is a fully managed database that is kept in sync with the primary database. Read replicas allow you to scale out read-heavy
workloads by distributing read queries across multiple databases. This can help improve the performance of the database and reduce the impact
on the primary database.
By configuring the script to query the read replica, the development team can continue to use the primary database for development tasks, while
the script's queries will be directed to the read replica. This will reduce the load on the primary database and improve its performance.
upvoted 6 times
6 months, 1 week ago
***WRONG***
Option A (modifying the DB instance to be a Multi-AZ deployment) would not address the issue of the script's queries impacting the primary
database.
Option C (instructing the development team to manually export the entries in the database at the end of each day) would not be an efficient
solution as it would require manual effort and could lead to data loss if the export process is not done properly.
Option D (using Amazon ElastiCache to cache the common queries) could improve the performance of the script's queries, but it would not
address the issue of the script's queries impacting the primary database.
upvoted 4 times
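For reference, option B comes down to one RDS API call plus pointing the script at the replica's endpoint. A sketch of the boto3 `create_db_instance_read_replica` parameters, with hypothetical identifiers:

```python
def read_replica_kwargs(source_instance_id, replica_instance_id):
    """Keyword arguments for rds_client.create_db_instance_read_replica().

    Both identifiers are hypothetical placeholders. After the replica is
    available, the reporting script connects to the replica endpoint so
    its reads no longer load the primary instance.
    """
    return {
        "DBInstanceIdentifier": replica_instance_id,
        "SourceDBInstanceIdentifier": source_instance_id,
    }

# Usage (assumes configured AWS credentials):
# import boto3
# boto3.client("rds").create_db_instance_read_replica(
#     **read_replica_kwargs("movies-prod", "movies-prod-replica"))
```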
6 months, 1 week ago
b is correct
Amazon RDS Read Replicas provide enhanced performance and durability for Amazon RDS database (DB) instances. They make it easy to elastically
scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a
given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read
throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for
MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
upvoted 1 times
6 months, 1 week ago
D is not reducing operational overhead, since there is development effort to integrate the app to a cache. you have to manage the cluster of the
elastic cache
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
It's a DB instance, not a managed instance, so you can't use a read replica.
upvoted 1 times
6 months, 1 week ago
The script performs two tasks. First, it runs queries to RECORD the number of new movies that have been added to the database. Second,
it must report a final total. The question asks how to improve the database's behavior while this script is running. I don't know if B
is a valid answer because you cannot RECORD in a read-only database. But the other 3 options make no sense to me either, so it's difficult to give a
certain answer.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
B - Add read replica and run the script against read replica endpoints.
upvoted 1 times
Topic 1
Question #91
A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and
read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?
A. Configure an S3 gateway endpoint.
B. Create an S3 bucket in a private subnet.
C. Create an S3 bucket in the same AWS Region as the EC2 instances.
D. Configure a NAT gateway in the same subnet as the EC2 instances.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Gateway endpoints provide reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a NAT device for your
VPC. It should be option A.
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 17 times
Highly Voted
6 months, 1 week ago
Selected Answer: A
***CORRECT***
The correct solution is Option A (Configure an S3 gateway endpoint.)
A gateway endpoint is a VPC endpoint that you can use to connect to Amazon S3 from within your VPC. Traffic between your VPC and Amazon S3
never leaves the Amazon network, so it doesn't traverse the internet. This means you can access Amazon S3 without the need to use a NAT
gateway or a VPN connection.
***WRONG***
Option B (creating an S3 bucket in a private subnet) is not a valid solution because S3 buckets do not have subnets.
Option C (creating an S3 bucket in the same AWS Region as the EC2 instances) is not a requirement for meeting the given security regulations.
Option D (configuring a NAT gateway in the same subnet as the EC2 instances) is not a valid solution because it would allow traffic to leave the
VPC and travel across the Internet.
upvoted 8 times
Most Recent
6 days, 17 hours ago
B. Creating an S3 in a private subnet restricts direct internet access to the bucket but does not provide a direct and secure connection between the
EC2and the S3. The application would still need to traverse the internet to access the S3 API.
C. Creating an S3 in the same Region as the EC2 does not inherently prevent traffic from traversing the internet.
D. Configuring a NAT gateway allows outbound internet connectivity for resources in private subnets, but it does not provide a direct and secure
connection to the S3 service. The traffic from the EC2 to the S3 API would still traverse the internet.
The most suitable solution is to configure an S3 gateway endpoint (option A). It provides a secure and private connection between the VPC and the
S3 service without requiring the traffic to traverse the internet. With an S3 gateway endpoint, the EC2 can access the S3 API directly within the VPC,
meeting the security requirement of preventing traffic from traveling across the internet.
upvoted 2 times
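For reference, option A is a single EC2 API call. A sketch of the boto3 `create_vpc_endpoint` parameters, with hypothetical IDs; the route tables listed receive a managed route that keeps S3 traffic on the AWS network:

```python
def s3_gateway_endpoint_kwargs(vpc_id, route_table_ids, region="us-east-1"):
    """Keyword arguments for ec2_client.create_vpc_endpoint() creating
    an S3 gateway endpoint.

    The VPC and route table IDs are hypothetical placeholders. Gateway
    endpoints for S3 have no hourly or data processing charge, unlike
    NAT gateways.
    """
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,
    }

# Usage (assumes configured AWS credentials):
# import boto3
# boto3.client("ec2").create_vpc_endpoint(
#     **s3_gateway_endpoint_kwargs("vpc-0abc", ["rtb-0def"]))
```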
1 month, 1 week ago
Selected Answer: A
Configure an S3 gateway endpoint is answer.
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: A
S3 Gateway Endpoint is a VPC endpoint,
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html
upvoted 1 times
Community vote distribution
A (100%)
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #92
A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the
application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Choose two.)
A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
B. Create a bucket policy to make the objects in the S3 bucket public.
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.
Correct Answer:
AC
6 days, 17 hours ago
Selected Answer: AC
A. This eliminates the need for the traffic to go over the internet, providing an added layer of security.
B. Making the objects public is wrong; it is important to restrict access to the bucket and its objects to authorized entities only.
C. This helps maintain the confidentiality of the sensitive user information by limiting access to authorized resources.
D. In this case, since the EC2 instances are accessing the S3 bucket from within the VPC, using IAM user credentials is unnecessary and can
introduce additional security risks.
E. Using a NAT instance to access the S3 bucket adds unnecessary complexity and overhead.
In summary, the recommended steps to provide secure access to the S3 from the application tier running on EC2 inside a VPC are to configure a
VPC gateway endpoint for S3 within the VPC (option A) and create a bucket policy that limits access to only the application tier running in the VPC
(option C).
upvoted 2 times
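For reference, options A and C combine naturally: the bucket policy can deny any request that does not arrive through the VPC gateway endpoint. A sketch using the documented `aws:SourceVpce` condition key; the bucket name and endpoint ID are hypothetical placeholders:

```python
def vpce_only_policy(bucket_name, vpce_id):
    """Bucket policy denying all S3 actions unless the request arrives
    through the given VPC gateway endpoint.

    Identifiers are hypothetical placeholders. An explicit Deny like
    this overrides any Allow, so traffic from outside the VPC endpoint
    (including the public internet) is blocked.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyUnlessFromVpce",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
                "Condition": {
                    "StringNotEquals": {"aws:SourceVpce": vpce_id}
                },
            }
        ],
    }
```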
1 month, 1 week ago
Selected Answer: AC
A & C the correct solutions.
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: AC
A and C
upvoted 1 times
3 months ago
Selected Answer: AC
A and C
upvoted 1 times
4 months, 1 week ago
Selected Answer: AC
The key part that many miss is 'combination'.
The other answers are not wrong, but
A works with C and not with the rest, as they need an internet connection.
upvoted 2 times
4 months, 1 week ago
Selected Answer: AC
AC is correct
upvoted 1 times
4 months, 1 week ago
Selected Answer: AC
https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-noauthentication/
upvoted 2 times
Community vote distribution
AC (82%)
CD (18%)
5 months, 1 week ago
Selected Answer: CD
C & D for security. A addresses accessibility, which is not a concern here IMO.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: AC
A & C.
When the question is about security, do not select the answer that stores credentials on the EC2 instance. This should be done using an IAM policy + role or Secrets
Manager.
upvoted 2 times
5 months, 3 weeks ago
C and D
To provide secure access to the S3 bucket from the application tier running on EC2 instances inside a VPC, you should create a bucket policy that
limits access to only the application tier running in the VPC. This will ensure that only the application tier has access to the bucket and its contents.
Additionally, you should create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance. This will allow the EC2
instance to access the S3 bucket using the IAM user's permissions.
Option A, configuring a VPC gateway endpoint for Amazon S3 within the VPC, would not provide any additional security for the S3 bucket.
Option B, creating a bucket policy to make the objects in the S3 bucket public, would not provide sufficient security for sensitive user information.
Option E, creating a NAT instance and having the EC2 instances use the NAT instance to access the S3 bucket, would not provide any additional
security for the S3 bucket
upvoted 1 times
6 months ago
Selected Answer: AC
A and C is right among the choice.
But instead of having bucket policy for VPC access better option would be to create a role with specific S3 bucket access and attach that role EC2
instances that needs access to S3 buckets.
upvoted 3 times
6 months ago
Selected Answer: AC
A & C looks correct
upvoted 1 times
6 months, 1 week ago
Selected Answer: CD
***CORRECT***
The solutions architect should take the following steps to accomplish secure access to the S3 bucket from the application tier running on Amazon
EC2 instances inside a VPC:
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
upvoted 3 times
6 months ago
After reviewing thoroughly the AWS documentation and the other answers in the discussion, I am taking back my previous answer. The correct
answer for me is Option A and Option C.
To provide secure access to the S3 bucket from the application tier running on Amazon EC2 instances inside the VPC, the solutions architect
should take the following combination of steps:
Option A: Configure a VPC gateway endpoint for Amazon S3 within the VPC.
Amazon S3 VPC Endpoints: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
Option C: Create a bucket policy that limits access to only the application tier running in the VPC.
Amazon S3 Bucket Policies: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html
AWS Identity and Access Management (IAM) Policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
upvoted 6 times
6 months, 1 week ago
***INCORRECT***
Option C ensures that the S3 bucket is only accessible to the application tier running in the VPC, while Option D allows the EC2 instances to
access the S3 bucket using the IAM credentials of the IAM user. This ensures that access to the S3 bucket is secure and controlled through IAM.
Option A is incorrect because configuring a VPC gateway endpoint for Amazon S3 does not directly control access to the S3 bucket.
Option B is incorrect because making the objects in the S3 bucket public would not provide secure access to the bucket.
Option E is incorrect because creating a NAT instance is not necessary to provide secure access to the S3 bucket from the application tier
running on EC2 instances in the VPC.
upvoted 1 times
7 months ago
Selected Answer: AC
Option AC
upvoted 1 times
7 months, 1 week ago
A and C
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: AC
AC is the correct answer in the use case
upvoted 1 times
7 months, 2 weeks ago
Options A and E
upvoted 1 times
7 months, 2 weeks ago
Typo it should be A and C
upvoted 1 times
Topic 1
Question #93
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase
the application's elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development
team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience
unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend replacement architecture that alleviates the application latency issue. The replacement architecture also
must give the development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and
restore process that uses the mysqldump utility.
B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging
database.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a
backup and restore process that uses the mysqldump utility.
Correct Answer:
B
Highly Voted
6 months, 1 week ago
Selected Answer: B
The recommended solution is Option B: Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create
the staging database on-demand.
To alleviate the application latency issue, the recommended solution is to use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production,
and use database cloning to create the staging database on-demand. This allows the development team to continue using the staging
environment without delay, while also providing elasticity and availability for the production application.
Therefore, Options A, C, and D are not recommended
upvoted 9 times
6 months, 1 week ago
Option A: Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populating the staging database by implementing a
backup and restore process that uses the mysqldump utility is not the recommended solution because it involves taking a full export of the
production database, which can cause unacceptable application latency.
Option C: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Using the standby instance for the staging
database is not the recommended solution because it does not give the development team the ability to continue using the staging
environment without delay. The standby instance is used for failover in case of a production instance failure, and it is not intended for use as a
staging environment.
upvoted 9 times
6 months, 1 week ago
Option D: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populating the staging database by
implementing a backup and restore process that uses the mysqldump utility is not the recommended solution because it involves taking a
full export of the production database, which can cause unacceptable application latency.
upvoted 5 times
Most Recent
6 days, 17 hours ago
Selected Answer: B
A. Populating the staging database through a backup and restore process using the mysqldump utility would still result in delays and impact
application latency.
B. With Aurora, you can create a clone of the production database quickly and efficiently, without the need for time-consuming backup and restore
processes. The development team can spin up the staging database on-demand, eliminating delays and allowing them to continue using the
staging environment without interruption.
C. Using the standby instance for the staging database would not provide the development team with the ability to use the staging environment
without delay. The standby instance is designed for failover purposes and may not be readily available for immediate use.
D. Relying on a backup and restore process using the mysqldump utility would still introduce delays and impact application latency during the data population phase.
upvoted 2 times
Community vote distribution
B (89%)
11%
2 months, 4 weeks ago
Selected Answer: B
With Amazon Aurora MySQL, creating a staging database using database cloning is an easy process. Using database cloning will eliminate the
performance issues that occur when a full export is done, and the new database is created. In addition, Amazon Aurora's high availability is
provided through Multi-AZ deployment, and read replicas can be used to serve the heavy read traffic without affecting the production database.
This solution provides better scalability, elasticity, and availability than the current architecture.
upvoted 3 times
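For reference, Aurora database cloning is invoked through the point-in-time-restore API with a copy-on-write restore type. A sketch of the boto3 `restore_db_cluster_to_point_in_time` parameters, with hypothetical cluster identifiers:

```python
def aurora_clone_kwargs(source_cluster_id, clone_cluster_id):
    """Keyword arguments for rds_client.restore_db_cluster_to_point_in_time()
    that create an Aurora clone.

    Cluster identifiers are hypothetical placeholders. With
    copy-on-write, the clone shares storage pages with the source and
    copies a page only when either side modifies it, so creation is
    fast and does not load the production cluster.
    """
    return {
        "DBClusterIdentifier": clone_cluster_id,
        "SourceDBClusterIdentifier": source_cluster_id,
        "RestoreType": "copy-on-write",
        "UseLatestRestorableTime": True,
    }

# Usage (assumes configured AWS credentials):
# import boto3
# boto3.client("rds").restore_db_cluster_to_point_in_time(
#     **aurora_clone_kwargs("movies-prod", "movies-staging"))
```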
3 months ago
Answer B:
upvoted 1 times
4 months, 1 week ago
Selected Answer: B
https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
upvoted 3 times
4 months, 2 weeks ago
Selected Answer: B
Database cloning is the best answer
upvoted 1 times
6 months ago
Selected Answer: B
Database cloning is right answer here.
upvoted 1 times
6 months, 2 weeks ago
Option B is right.
You cannot access the standby instance for reads in RDS Multi-AZ deployments.
upvoted 3 times
6 months, 1 week ago
This is correct; standby instances cannot be used for reads/writes and are failover targets. Read replicas can be used for that, so B is correct.
upvoted 2 times
6 months, 1 week ago
In a RDS Multi-AZ deployment, you can use the standby instance for read-only purposes, such as running queries and reporting. This is known
as a "read replica." You can create one or more read replicas of a DB instance and use them to offload read traffic from the primary instance.
https://aws.amazon.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-support-multi-az-deployments/
upvoted 3 times
6 months, 2 weeks ago
Selected Answer: C
why not C
upvoted 3 times
7 months ago
Selected Answer: B
Option B
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
Amazon Aurora Fast Database Cloning is what is required here.
https://aws.amazon.com/blogs/aws/amazon-aurora-fast-database-cloning/
upvoted 1 times
8 months ago
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html
upvoted 2 times
8 months, 2 weeks ago
Selected Answer: B
B
Database cloning
upvoted 4 times
Topic 1
Question #94
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple
processing to transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files.
On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an
Amazon Aurora DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances
to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda
function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is
uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an
Amazon Aurora DB cluster.
Correct Answer:
C
Highly Voted
8 months ago
Option C
DynamoDB is a NoSQL database with JSON support
upvoted 9 times
8 months ago
also Use an AWS Lambda - serverless - less operational overhead
upvoted 8 times
Most Recent
6 days, 16 hours ago
Selected Answer: C
A. Configuring EMR and an Aurora DB cluster for this use case would introduce unnecessary complexity and operational overhead. EMR is typically
used for processing large datasets and running big data frameworks like Apache Spark or Hadoop.
B. While using S3 event notifications and SQS for decoupling is a good approach, using EC2 to process the data would introduce operational
overhead in terms of managing and scaling the EC2.
D. Using EventBridge and Kinesis Data Streams for this use case would introduce additional complexity and operational overhead compared to the
other options. EventBridge and Kinesis are typically used for real-time streaming and processing of large volumes of data.
In summary, option C is the recommended solution as it provides a serverless and scalable approach for processing uploaded files using S3 event
notifications, SQS, and Lambda. It offers low operational overhead, automatic scaling, and efficient handling of varying demand. Storing the
resulting JSON file in DynamoDB aligns with the requirement of saving the data for later analysis.
upvoted 2 times
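The option C pipeline discussed above is easy to sketch. Below is a minimal, hedged example of the Lambda handler: each SQS record wraps an S3 event notification, which the function transforms into a JSON item. The bucket/key/table names and item shape are made up for illustration, and the DynamoDB write is stubbed out so the sketch runs without AWS credentials:

```python
import json

def handler(event, context=None):
    """Sketch of the Lambda in option C: each SQS record's body is an
    S3 event notification; transform it into a JSON item. The real
    DynamoDB put_item call is left as a comment (names are illustrative)."""
    items = []
    for record in event["Records"]:
        s3_event = json.loads(record["body"])  # S3 notification JSON
        for s3_record in s3_event["Records"]:
            item = {
                "bucket": s3_record["s3"]["bucket"]["name"],
                "key": s3_record["s3"]["object"]["key"],
                "status": "processed",
            }
            # boto3.resource("dynamodb").Table("results").put_item(Item=item)
            items.append(item)
    return items

# Example SQS event carrying one S3 object-created notification:
sqs_event = {"Records": [{"body": json.dumps(
    {"Records": [{"s3": {"bucket": {"name": "uploads"},
                         "object": {"key": "img.png"}}}]})}]}
print(handler(sqs_event))
```

Because Lambda polls the queue and scales its invocations automatically, this is the "no servers to manage" part that makes C the least-overhead answer.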
1 month, 1 week ago
Selected Answer: C
Option C is correct - DynamoDB is a NoSQL database with JSON support
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
SQS + Lambda + JSON >>>>>> DynamoDB
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
Option C is the right answer.
upvoted 1 times
2 months ago
Can someone explain why SQS? It's poll-based messaging; does it guarantee reacting to the event ASAP?
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: C
DynamoDB is a NoSQL database with JSON support
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C, Configuring Amazon S3 to send an event notification to an Amazon Simple Queue Service (SQS) queue and using an AWS Lambda
function to read from the queue and process the data, would likely be the solution with the least operational overhead.
AWS Lambda is a serverless computing service that allows you to run code without the need to provision or manage infrastructure. When a new file
is uploaded to Amazon S3, it can trigger an event notification which sends a message to an SQS queue. The Lambda function can then be set up to
be triggered by messages in the queue, and it can process the data and store the resulting JSON file in Amazon DynamoDB.
upvoted 2 times
6 months, 1 week ago
Using a serverless solution like AWS Lambda can help to reduce operational overhead because it automatically scales to meet demand and does
not require you to provision and manage infrastructure. Additionally, using an SQS queue as a buffer between the S3 event notification and the
Lambda function can help to decouple the processing of the data from the uploading of the data, allowing the processing to happen
asynchronously and improving the overall efficiency of the system.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
Option C as JSON is supported by DynamoDB. RDS or AuroraDB are not suitable for JSON data.
A - Because this is not a Bigdata analytics usecase.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
CCCCCCCC
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
Answer C
upvoted 1 times
7 months ago
Selected Answer: C
answer is C
upvoted 1 times
7 months ago
Selected Answer: C
cccccccccccc
upvoted 1 times
7 months ago
Selected Answer: C
Option C
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/67958-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
SQS + Lambda + JSON to DynamoDB
upvoted 1 times
Topic 1
Question #95
An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB
instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions
architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?
A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone.
B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone.
C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database.
D. Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.
Correct Answer:
D
Highly Voted
6 months, 1 week ago
Selected Answer: D
The solutions architect should recommend option D: Create read replicas for the database. Configure the read replicas with the same compute and
storage resources as the source database.
Creating read replicas allows the application to offload read traffic from the source database, improving its performance. The read replicas should
be configured with the same compute and storage resources as the source database to ensure that they can handle the read workload effectively.
upvoted 8 times
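What option D buys you at the application layer is a read/write split: writes keep hitting the primary endpoint while SELECTs fan out to replica endpoints. A tiny illustrative sketch of that routing (the endpoint names are invented; real code would hold the actual RDS endpoints):

```python
import itertools

# Invented endpoint names for illustration only
PRIMARY = "mydb.xyz.us-east-1.rds.amazonaws.com"
REPLICAS = itertools.cycle([
    "mydb-replica-1.xyz.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.xyz.us-east-1.rds.amazonaws.com",
])

def endpoint_for(statement: str) -> str:
    """Route SELECTs to a replica (round-robin); everything else to primary."""
    if statement.lstrip().upper().startswith("SELECT"):
        return next(REPLICAS)
    return PRIMARY

print(endpoint_for("SELECT * FROM products"))  # one of the replica endpoints
print(endpoint_for("UPDATE products SET price = 1"))  # the primary endpoint
```

This is also why the replicas need the same compute/storage as the source: they must absorb the full read workload, not half of it.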
Most Recent
6 days, 16 hours ago
Selected Answer: D
A. In a Multi-AZ deployment, a standby replica of the database is created in a different AZ for high availability and automatic failover purposes.
However, serving read requests from the primary AZ alone would not effectively separate read and write traffic. Both read and write traffic would
still be directed to the primary database instance, which might not fully optimize performance.
B. The secondary instance in a Multi-AZ deployment is intended for failover and backup purposes, not for actively serving read traffic. It operates in
a standby mode and is not optimized for handling read queries efficiently.
C. Configuring the read replicas with half of the compute and storage resources as the source database might not be optimal. It's generally
recommended to configure the read replicas with the same compute and storage resources as the source database to ensure they can handle the
read workload effectively.
D. Configuring the read replicas with the same compute and storage resources as the source database ensures that they can handle the read
workload efficiently and provide the required performance boost.
upvoted 2 times
1 month, 1 week ago
Selected Answer: D
D meets the requirements.
upvoted 1 times
1 month, 2 weeks ago
Option C suggests creating read replicas for the database and configuring them with half of the compute and storage resources as the source
database. This is a better option as it allows read traffic to be offloaded from the primary database, separating read traffic from write traffic.
Configuring the read replicas with half the resources will also save on costs.
upvoted 1 times
1 month ago
Err, just curious, what if the production database is 51% full? Your half storage read replica would explode…?
upvoted 3 times
3 months ago
Can anyone explain why B is not an option?
upvoted 4 times
2 months, 2 weeks ago
Multi-AZ: Synchronous replication occurs, meaning that synchronizing data between DB instances immediately can slow down application's
performance. But this method increases High Availability.
Read Replicas: Asynchronous replication occurs, meaning that replicating data in other moments rather than in the writing will maintain
application's performance. Although the data won't be HA as Multi-AZ kind of deployment, this method increases Scalability. Good for read
heavy workloads.
upvoted 3 times
3 months ago
CHATGPT says:
To optimize the application's performance and separate read traffic from write traffic, the solutions architect should recommend creating read
replicas for the database and configuring them to serve read requests. Option C and D both suggest creating read replicas, but option D is a
better choice because it configures the read replicas with the same compute and storage resources as the source database.
Option A and B suggest changing the existing database to a Multi-AZ deployment, which would provide high availability by replicating the
database across multiple Availability Zones. However, it would not separate read and write traffic, so it is not the best solution for optimizing
application performance in this scenario.
upvoted 4 times
3 months, 1 week ago
You can create up to 15 read replicas from one DB instance within the same Region. For replication to operate effectively, each read replica should
have the same amount of compute and storage resources as the source DB instance. If you scale the source DB instance, also scale the read
replicas.
upvoted 2 times
1 month, 1 week ago
I think for RDS it is 5 read replicas. 15 is for aurora serverless
upvoted 1 times
7 months ago
Selected Answer: D
Option D
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 3 weeks ago
D
https://www.examtopics.com/discussions/amazon/view/46461-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
7 months, 4 weeks ago
Selected Answer: D
If you scale the source DB instance, also scale the read replicas.
upvoted 2 times
8 months, 1 week ago
Selected Answer: D
D is correct.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_MySQL.Replication.ReadReplicas.html
upvoted 2 times
Topic 1
Question #96
An Amazon EC2 administrator created the following policy associated with an IAM group containing several users:
What is the effect of this policy?
A. Users can terminate an EC2 instance in any AWS Region except us-east-1.
B. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.
C. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
D. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
Correct Answer:
C
Highly Voted
5 months, 3 weeks ago
What the policy means:
1. Allow termination of any instance if the user's source IP address is 10.100.100.254.
2. Deny termination of instances that are not in the us-east-1 Region.
Combining these two, you get:
“Allow instance termination in the us-east-1 Region if the user's source IP address is 10.100.100.254. Deny the termination operation in other Regions.”
upvoted 15 times
1 month, 2 weeks ago
Nice explanation. Thanks
upvoted 2 times
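The policy itself is an image in the original question, so here is a toy re-implementation of its evaluation logic as the explanation above describes it (the 10.100.100.0/24 CIDR is an assumption taken from the discussion). The key rule is that an explicit Deny always wins over an Allow, and the Deny here uses StringNotEquals on the Region:

```python
from ipaddress import ip_address, ip_network

# Assumption from the discussion: the Allow statement is conditioned
# on this source CIDR; the Deny fires when the region is NOT us-east-1.
ALLOWED_NET = ip_network("10.100.100.0/24")

def can_terminate(source_ip: str, region: str) -> bool:
    allowed = ip_address(source_ip) in ALLOWED_NET
    denied = region != "us-east-1"   # StringNotEquals "us-east-1" => explicit Deny
    return allowed and not denied    # explicit Deny always overrides Allow

print(can_terminate("10.100.100.254", "us-east-1"))  # True  -> matches option C
print(can_terminate("10.100.100.254", "eu-west-1"))  # False (explicit Deny)
print(can_terminate("192.0.2.1", "us-east-1"))       # False (no Allow matched)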
Highly Voted
6 months, 4 weeks ago
C is correct.
In a 10.0.0.0/24 CIDR block, the following five IP addresses are reserved:
10.0.0.0: Network address.
10.0.0.1: Reserved by AWS for the VPC router.
10.0.0.2: Reserved by AWS. The IP address of the DNS server is the base of the VPC network range plus two. ...
10.0.0.3: Reserved by AWS for future use.
10.0.0.255: Network broadcast address.
upvoted 11 times
1 month, 1 week ago
A good explanation!
upvoted 2 times
Most Recent
1 month, 1 week ago
Selected Answer: C
Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254. Option C is correct
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254. Option C is right one.
upvoted 1 times
1 month, 2 weeks ago
I think D
upvoted 1 times
2 months, 1 week ago
Selected Answer: C
its C
Deny & NOT Equal = CAN (basic logic folks)
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: C
Oh... tricky.. TT... C is correct ...
upvoted 1 times
2 months, 3 weeks ago
It's C:
deny all ec2 if StringNotEquals us-east-1: means deny everything unless the region is us-east-1
upvoted 1 times
3 months ago
Answer C:
upvoted 1 times
3 months, 2 weeks ago
C is correct answer
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
10.100.100.254 is within the allowed CIDR block. However, it's in the us-east-1 Region, and the Deny rules all
upvoted 3 times
2 months, 1 week ago
The deny rule blocks everyone EXCEPT us-east-1 from deleting EC2 instances.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: C
IAM Conditions mean you can choose to grant/deny access to principals only if specified conditions are met.
In our case, StringNotEquals "us-east-1" means deny everything unless the region is us-east-1
An easier way to understand it but less effective ofcourse to achieve the same result would be configuring deny all ec2 if StringEquals: *state any
other region except for us-east-1*
Correct answer is C
upvoted 1 times
4 months, 4 weeks ago
D is correct
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
Deny overrules Allow. The first statement allows 10.100.100.254, but the second statement denies the us-east-1 Region.
upvoted 3 times
2 months, 1 week ago
The deny applies to all regions that are not us-east-1, therefore, us-east-1 is allowed.
upvoted 1 times
2 months, 4 weeks ago
StringNotEqual
upvoted 4 times
5 months, 2 weeks ago
Deny overrules Allow. The first statement allows 10.100.100.254, but the second statement denies the us-east-1 Region.
upvoted 2 times
5 months, 2 weeks ago
Please disregard the initial answer. D is the CORRECT answer.
upvoted 2 times
5 months, 2 weeks ago
C is the correct answer.
upvoted 2 times
Topic 1
Question #97
A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company
wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and
integrated with Active Directory for access control.
Which solution will satisfy these requirements?
A. Configure Amazon EFS storage and set the Active Directory domain for authentication.
B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Correct Answer:
D
Highly Voted
6 months, 1 week ago
Selected Answer: D
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Amazon FSx for Windows File Server is a fully managed file storage service that is designed to be used with Microsoft Windows workloads. It is
integrated with Active Directory for access control and is highly available, as it stores data across multiple availability zones. Additionally, FSx can be
used to migrate data from on-premises Microsoft Windows file servers to the AWS Cloud. This makes it a good fit for the requirements described
in the question.
upvoted 10 times
Most Recent
6 days, 15 hours ago
Selected Answer: D
A. EFS does not provide native integration with AD for access control. While you can configure EFS to work with AD, it requires additional setup and
is not as straightforward as using a dedicated Windows file system like FSx for Windows File Server.
B. It may introduce additional complexity for this use case. Creating an SMB file share using AWS Storage Gateway would require maintaining the
gateway and managing the synchronization between on-premises and AWS storage.
C. S3 does not natively provide the SMB file protocol required for MS SharePoint and Windows shared file storage. While it is possible to mount an
S3 bucket as a volume using third-party tools or configurations, it is not recommended.
D. FSx for Windows File Server is a fully managed, highly available file storage service that is compatible with MS Windows shared file storage
requirements. It provides native integration with AD, allowing for seamless access control and authentication using existing AD user accounts.
upvoted 2 times
2 months ago
Selected Answer: D
D is correct. FSx is for Windows and supports AD authentication
upvoted 1 times
2 months, 1 week ago
Why not B? Migrating the workload? Maybe is needed a hybrid cloud solution
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
One solution that can satisfy the mentioned requirements is to use Amazon FSx for Windows File Server. Amazon FSx is a fully managed service
that provides highly available and scalable file storage for Windows-based applications. It is designed to be fully integrated with Active Directory,
which allows you to use your existing domain users and groups to control access to your file shares.
Amazon FSx provides the ability to migrate data from on-premises file servers to the cloud, using tools like AWS DataSync, Robocopy or
PowerShell. Once the data is migrated, you can continue to use the same tools and processes to manage and access the file shares as you would
on-premises.
Amazon FSx also provides features such as automatic backups, data encryption, and native multi-Availability Zone (AZ) deployments for high
availability. It can be easily integrated with other AWS services, such as Amazon S3, Amazon EFS, and AWS Backup, for additional functionality and
backup options.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
FSx is for Windows
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
I'm going for D as the answer because FSx is compatible with Windows
upvoted 1 times
6 months, 4 weeks ago
Selected Answer: D
Answer is D
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 2 weeks ago
Window only available for using FSx
upvoted 3 times
7 months, 3 weeks ago
D. Windows is the keyword
https://www.examtopics.com/discussions/amazon/view/29780-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
7 months, 3 weeks ago
EFS is for Linux
FSx is for Windows
upvoted 6 times
7 months, 4 weeks ago
Selected Answer: D
DDDDDDDD
upvoted 1 times
8 months ago
Correct Answer:D
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html
upvoted 2 times
Topic 1
Question #98
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3
bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS)
standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users
through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are
invoking the Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window
timeout.
D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: C
answer should be C,
users get duplicated messages because -> lambda polls the message, and starts processing the message.
However, before the first lambda can finish processing the message, the visibility timeout runs out on SQS, and SQS returns the message to the
poll, causing another Lambda node to process that same message.
By increasing the visibility timeout, it should prevent SQS from returning a message back to the poll before Lambda can finish processing the
message
upvoted 31 times
5 months, 2 weeks ago
I am confused. If the email has been sent many times already why would they need more time?
I believe an SQS FIFO queue will keep messages in order and any duplicates with the same ID will be deleted. Can you tell me where I am going wrong? Thanks
upvoted 3 times
2 months, 1 week ago
Increasing the visibility timeout would give time to the lambda function to finish processing the message, which would make it disappear
from the queue, and therefore only one email would be send to the user.
If the visibility timeout ends while the lambda function is still processing the message, the message will be returned to the queue and there
another lambda function would pick it up and process it again, which would result in the user receiving two or more emails about the same
thing.
upvoted 3 times
5 months, 1 week ago
I tend to agree with you. See my comments above.
upvoted 1 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
this is important part:
Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS
sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The
default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
upvoted 12 times
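The sizing rule behind option C can be written down as a one-liner: the visibility timeout must exceed the function timeout plus the batch window, or the message reappears mid-processing and a second invocation sends a duplicate email. The 6x multiplier is AWS's published guidance for SQS-triggered Lambdas; the sample numbers below are illustrative:

```python
# Option C in numbers: pick a visibility timeout large enough that the
# message stays hidden for the whole Lambda run (plus batching delay).
# AWS guidance for SQS-triggered Lambdas is at least 6x the function timeout.
def min_visibility_timeout(function_timeout_s: int, batch_window_s: int) -> int:
    # take the stricter of the two published rules
    return max(function_timeout_s + batch_window_s, 6 * function_timeout_s)

# Illustrative values: a 30 s function with a 10 s batch window
print(min_visibility_timeout(30, 10))  # -> 180
```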
Most Recent
2 days ago
Answer is B.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
By changing the SQS standard queue to an SQS FIFO (First-In-First-Out) queue, you can ensure that messages are processed in the order they are
received and that each message is processed only once. FIFO queues provide exactly-once processing and eliminate duplicates.
Using the message deduplication ID feature of SQS FIFO queues, you can assign a unique identifier (such as the S3 object key) to each message.
SQS will check the deduplication ID of incoming messages and discard duplicate messages with the same deduplication ID. This ensures that only
unique messages are processed by the Lambda function.
This solution requires minimal operational overhead as it mainly involves changing the queue type and using the deduplication ID feature, without
requiring modifications to the Lambda function or adjusting timeouts.
upvoted 2 times
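For completeness, here is roughly how deduplication on a FIFO queue (option B) behaves, modeled in plain Python. The real queue does this server-side within a 5-minute deduplication interval; this sketch only illustrates the idea of a content-derived deduplication ID:

```python
import hashlib

def dedup_id(body: str) -> str:
    # Content-based deduplication: the ID is just a hash of the message body
    return hashlib.sha256(body.encode()).hexdigest()

seen = set()  # stand-in for the queue's 5-minute deduplication window

def accept(body: str) -> bool:
    """Return True if the queue would deliver this message, False if it
    would be discarded as a duplicate."""
    did = dedup_id(body)
    if did in seen:
        return False
    seen.add(did)
    return True

print(accept("s3://uploads/img.png created"))  # True
print(accept("s3://uploads/img.png created"))  # False (duplicate discarded)
```

Note this only removes duplicate *messages*; it does not protect against a visibility-timeout expiry redelivering a message that a Lambda is still processing, which is why most voters here prefer C.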
6 days, 15 hours ago
Selected Answer: C
A. Long polling doesn't directly address the issue of multiple invocations of the Lambda for the same message. Increasing the ReceiveMessage wait time may
not completely prevent duplicate invocations.
B. Changing the queue type from standard to FIFO requires additional considerations and changes to the application architecture. It may involve
modifying the event configuration and handling message deduplication IDs, which can introduce operational overhead.
D. Deleting messages immediately after reading them may lead to message loss if the Lambda encounters an error or fails to process the image
successfully. It does not guarantee message processing and can result in data loss.
C. By setting the visibility timeout to a value greater than the total time required for the Lambda to process the image and send the email, you
ensure that the message is not made visible to other consumers during processing. This prevents duplicate invocations of the Lambda for the same
message.
upvoted 2 times
1 month, 1 week ago
FIFO - IS A SOLUTION BUT REQUIRES OPERATIONAL OVERHEAD.
INCREASING VISIBILITY TIMEOUT - REQUIRES FAR LESS OPERATIONAL OVERHEAD.
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
I go for option C.
upvoted 1 times
1 month, 3 weeks ago
SQS VISIBILITY TIMEOUT can help prevent reprocessing of a message from the queue. By default the timeout is 30 seconds; the minimum is 0 and the
maximum is 12 hours.
upvoted 1 times
2 months ago
Selected Answer: C
ccccccc
upvoted 1 times
2 months, 1 week ago
Apologies, I meant A is wrong
upvoted 1 times
2 months, 1 week ago
C is wrong:
'When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20
seconds.'
upvoted 1 times
2 months, 1 week ago
I took the exam in April 2023; out of 65 questions only 25-30 were from the dumps. I passed (820!), but these dumps alone are not reliable. One
thing is sure: going through these questions and working out the answers yourself will help you pass the actual exam.
upvoted 2 times
1 month, 4 weeks ago
Hi KUl91, what other study test sources did you use?
upvoted 1 times
3 months ago
Selected Answer: C
Key is minimal operational overhead.
upvoted 1 times
3 months ago
Selected Answer: B
The only option that can rule out duplicated messages is B), as per the doc: "Unlike standard queues, FIFO queues don't introduce duplicate messages.
FIFO queues help you avoid sending duplicates to a queue."
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-exactly-once-processing.html
Answer C, though with less ops overheads, doesn't guarantee to rule out the event to send multiple emails related to the same image. This will
avoid (minimise) processing the same message two or more times, however do not solve the problem of duplicated messages.
upvoted 4 times
3 months, 2 weeks ago
C. In an application under heavy load or with spiky traffic patterns, it's recommended that you:
Set the queue's visibility timeout to at least six times the function timeout value. This allows the function time to process each batch of records if
the function execution is throttled while processing a previous batch.
https://docs.aws.amazon.com/lambda/latest/operatorguide/sqs-retries.html
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: C
To address the issue of users receiving multiple email messages for every uploaded image with the least operational overhead, increasing the
visibility timeout in the SQS queue is the best solution. This requires no additional configuration and thus has the least operational overhead
compared to other options. However, this solution does not completely prevent duplicates, so there is still a possibility of duplicate emails being
sent. While using a FIFO queue can prevent duplicates, it requires additional configuration and therefore may have higher operational overhead.
upvoted 2 times
4 months ago
Here https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html it says this about SQS standard:
"For standard queues, the visibility timeout isn't a guarantee against receiving a message twice. For more information, see At-least-once delivery."
upvoted 1 times
5 months, 1 week ago
the only thing that addresses deduplication is using a FIFO queue OR by coding idempotency into your code. Increasing the visibility timeout only
means you can delete the message you were processing, it doesn't handle the duplicates and therefore doesn't answer the question of
"What should the solutions architect do to resolve this issue "
upvoted 1 times
2 months, 1 week ago
I believe this is more about preventing duplicates from happening than it is with what to do with duplicates if they happen.
upvoted 1 times
5 months ago
the case is not about dups on the queue, but invoking the lambda function many times
upvoted 2 times
Topic 1
Question #99
A company is implementing a shared storage solution for a gaming application that is hosted in an on-premises data center. The company needs
the ability to use Lustre clients to access data. The solution must be fully managed.
Which solution meets these requirements?
A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the
file share.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin
server. Connect the application server to the file system.
D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
Answer is D.
Lustre in the question is only available as FSx
https://aws.amazon.com/fsx/lustre/
upvoted 22 times
Highly Voted
6 months, 1 week ago
Selected Answer: D
Option D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Amazon FSx for Lustre is a fully managed file system that is designed for high-performance workloads, such as gaming applications. It provides a
high-performance, scalable, and fully managed file system that is optimized for Lustre clients, and it is fully integrated with Amazon EC2. It is the
only option that meets the requirements of being fully managed and able to support Lustre clients.
upvoted 9 times
Most Recent
6 days, 15 hours ago
Selected Answer: D
A. Lustre client access is not supported by AWS Storage Gateway file gateway.
B. Creating a Windows file share on an EC2 Windows instance is suitable for Windows-based file sharing, but it does not provide the required
Lustre client access. Lustre is a high-performance parallel file system primarily used in high-performance computing (HPC) environments.
C. EFS does not natively support Lustre client access. Although EFS is a managed file storage service, it is designed for general-purpose file storage
and is not optimized for Lustre workloads.
D. Amazon FSx for Lustre is a fully managed file system optimized for high-performance computing workloads, including Lustre clients. It provides
the ability to use Lustre clients to access data in a managed and scalable manner. By choosing this option, the company can benefit from the
performance and manageability of Amazon FSx for Lustre while meeting the requirement of Lustre client access.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: D
https://aws.amazon.com/fsx/lustre/ — "Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the
popular Lustre file system."
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Option D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
BUT the on-premises server couldn't view and have good performance with the EFS, so the question is absurd!
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
seriously? it spells out "Lustre" for you
upvoted 1 times
4 months, 3 weeks ago
D is the most logical solution. But the app is still on-prem, so Amazon FSx for Lustre alone is not enough to connect the storage to the app; we'll
need a File Gateway to use with FSx for Lustre.
upvoted 2 times
4 months, 4 weeks ago
D is correct
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
Topic 1
Question #100
A company's containerized application runs on an Amazon EC2 instance. The application needs to download security certificates before it can
communicate with other business applications. The company wants a highly secure solution to encrypt and decrypt the certificates in near real
time. The solution also needs to store data in highly available storage after the data is encrypted.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create AWS Secrets Manager secrets for encrypted certificates. Manually update the certificates as needed. Control access to the data by
using fine-grained IAM access.
B. Create an AWS Lambda function that uses the Python cryptography library to receive and perform encryption operations. Store the function
in an Amazon S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption
operations. Store the encrypted data on Amazon S3.
D. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption
operations. Store the encrypted data on Amazon Elastic Block Store (Amazon EBS) volumes.
Correct Answer:
D
Highly Voted
8 months, 2 weeks ago
C makes better sense. Between C (S3) and D (EBS), S3 is highly available with the LEAST operational overhead.
upvoted 25 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
Correct Answer is C: EBS is not highly available
upvoted 16 times
5 months, 2 weeks ago
EBS is Highly Available as it stores in multi AZ and S3 is regional.
upvoted 1 times
5 months ago
EBS also has Multi-AZ capability, but it does not replicate the data across multiple availability zones by default. When Multi-AZ is enabled, it
creates a replica of the EBS volume in a different availability zone and automatically failover to the replica in case of a failure. However, this
requires additional configuration and management. In comparison, Amazon S3 automatically replicates data across multiple availability
zones without any additional configuration. Therefore, storing the data on Amazon S3 provides a simpler and more efficient solution for high
availability.
upvoted 7 times
6 months ago
Per AWS: "Amazon EBS volumes are designed to be highly available, reliable, and durable"
https://aws.amazon.com/ebs/features/
upvoted 2 times
6 months, 1 week ago
Yes it is!
upvoted 1 times
Most Recent
6 days, 15 hours ago
Selected Answer: C
A. Manual - no no no!
B. External (python) library - no no no!
C. yeap.
D. S3 over EBS (see answer C)
upvoted 2 times
1 month, 2 weeks ago
I will go for D. As mentioned in the question, 'an EC2 instance', 'near real-time', and 'LEAST operational overhead' all point to EBS rather than S3.
upvoted 1 times
1 month, 2 weeks ago
The correct answer is D...
Using a containerized applications in EC2 mean it's easier to use EBS. S3 require extra work to be done and the question is about Least operational
overhead.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: C
The moment you see storage, think S3. It is default unless there is a very specific requirement where S3 does not fit which will be explicitly
described in the question
upvoted 2 times
1 month, 3 weeks ago
C make sense. as its asking for least operational overhead
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: C
A. manual put <> near real time
C. chooses as S3 is highly available
D: only for that EC2
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
To meet the requirements of securely downloading, encrypting, decrypting, and storing certificates with minimal operational overhead, you can use
AWS Key Management Service (KMS) and Amazon S3.
Here's how this solution would work:
Store the security certificates in an S3 bucket with Server-Side Encryption enabled.
Create a KMS Customer Master Key (CMK) for encrypting and decrypting the certificates.
Grant permission to the EC2 instance to access the CMK.
Have the application running on the EC2 instance retrieve the security certificates from the S3 bucket.
Use the KMS API to encrypt and decrypt the certificates as needed.
Store the encrypted certificates in another S3 bucket with Server-Side Encryption enabled.
This solution provides a highly secure way to encrypt and decrypt certificates and store them in highly available storage with minimal operational
overhead. AWS KMS handles the encryption and decryption of data, while S3 provides highly available storage for the encrypted data. The only
operational overhead involved is setting up the KMS CMK and S3 buckets, which is a one-time setup task.
upvoted 2 times
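As a sketch of what option C looks like in practice (the bucket, object key, and KMS key ARN below are hypothetical), S3 server-side encryption with a customer managed KMS key is just a matter of passing two extra parameters to `put_object`:

```python
def sse_kms_put_kwargs(bucket, key, body, kms_key_id):
    """Build boto3 put_object kwargs so S3 encrypts the object with the given KMS key."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # server-side encryption with AWS KMS
        "SSEKMSKeyId": kms_key_id,          # the customer managed key from option C
    }

kwargs = sse_kms_put_kwargs(
    "certs-bucket",                          # hypothetical bucket name
    "app/cert.pem",
    b"-----BEGIN CERTIFICATE-----...",
    "arn:aws:kms:us-east-1:123456789012:key/1234abcd-0example",  # hypothetical key ARN
)
# boto3.client("s3").put_object(**kwargs)  # actual call requires AWS credentials
```

Because S3 performs the encryption server-side with the KMS key, the application never manages ciphertext itself, which is the "least operational overhead" argument for C over D.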
3 months ago
C: S3 is highly available
upvoted 1 times
5 months, 2 weeks ago
Ans is C:
Security certificates are just normal files. It is not an SSL certificate, etc… confusing !!!!!!!
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
Is this the real question from Exam? It is typically vague. Usually S3 would be chosen when the situation mentioned "high availability". But AWS
official website states that EBS volume has 99.999% availability.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
EBS volumes are in one AZ and S3 buckets are a global resource.
Amazon EBS volumes are designed to be highly available, reliable, and durable. At no additional charge to you, Amazon EBS volume data is
replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component.
upvoted 2 times
5 months, 2 weeks ago
On 2nd thought, I'll change my answer to C
upvoted 4 times
4 months, 3 weeks ago
That was a hilarious change
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: D
upvoted 1 times
5 months, 3 weeks ago
LEAST operational - S3
upvoted 1 times
6 months ago
Correct answer is C,
Least operational overhead is S3
Amazon S3 provides durability by redundantly storing the data across multiple Availability Zones whereas EBS provides durability by redundantly
storing the data in a single Availability Zone.
Both S3 and EBS give an availability of 99.99%, but the difference is that S3 is accessed via the internet using APIs while EBS is
accessed by the single instance attached to it.
upvoted 3 times
6 months ago
Selected Answer: C
Well, they said Highly available. S3 is HA by default, EBS you need to ensure it's HA.
upvoted 2 times
Topic 1
Question #101
A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet
and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the
public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.
What should the solutions architect do to enable Internet access for the private subnets?
A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to
the NAT gateway in its AZ.
B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic
to the NAT instance in its AZ.
C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic
to the private internet gateway.
D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC
traffic to the egress-only Internet gateway.
Correct Answer:
A
Highly Voted
7 months, 3 weeks ago
Selected Answer: A
NAT Instances - OUTDATED BUT CAN STILL APPEAR IN THE EXAM!
However, given that A provides the newer option of NAT Gateway, then A is the correct answer.
B would be correct if NAT Gateway wasn't an option.
upvoted 9 times
2 months, 1 week ago
NAT instance or NAT Gateway always created in public subnet to provide internet access to private subnet. In option B. they are creating NAT
Instance in private subnet which is not correct.
upvoted 6 times
Most Recent
6 days, 15 hours ago
By creating a NAT gateway in each public subnet, the private subnets can route their Internet-bound traffic through the NAT gateways. This allows
EC2 in the private subnets to download software updates and access other resources on the Internet.
Additionally, a separate private route table should be created for each AZ. The private route tables should have a default route that forwards non-
VPC traffic (0.0.0.0/0) to the corresponding NAT gateway in the same AZ. This ensures that the private subnets use the appropriate NAT gateway
for Internet access.
B is incorrect because NAT instances require manual management and configuration compared to NAT gateways, which are a fully managed
service. NAT instances are also being deprecated in favor of NAT gateways.
C is incorrect because creating a second internet gateway on a private subnet is not a valid solution. Internet gateways are associated with public
subnets and cannot be directly associated with private subnets.
D is incorrect because egress-only internet gateways are used for IPv6 traffic.
upvoted 2 times
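The per-AZ routing described above can be sketched as plain data (the AZ names and NAT gateway IDs here are hypothetical): each private route table gets one default route pointing at the NAT gateway in its own AZ, which is what each `ec2.create_route` call would submit.

```python
def private_default_routes(nat_gateways_by_az):
    """One default route per AZ, each pointing at that AZ's NAT gateway (option A)."""
    return {
        az: {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": nat_id}
        for az, nat_id in nat_gateways_by_az.items()
    }

routes = private_default_routes({
    "us-east-1a": "nat-0aaa",  # hypothetical NAT gateway IDs,
    "us-east-1b": "nat-0bbb",  # one per public subnet / AZ
    "us-east-1c": "nat-0ccc",
})
# Each AZ's private route table would receive its own entry via
# ec2.create_route(RouteTableId=..., **routes[az])  # requires AWS credentials
```

Keeping the route target in the same AZ as the subnet avoids cross-AZ data charges and keeps each AZ's internet path independent of failures in the others.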
1 month ago
A NAT gateway is created in a public subnet and provides internet access to the private subnets.
upvoted 1 times
2 months ago
Selected Answer: A
A is correct.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
NAT instances are now discouraged by AWS, so choose the NAT gateway.
upvoted 2 times
3 months ago
A: NAT Gateway
upvoted 1 times
3 months ago
Selected Answer: A
NAT Gateway - AWS-managed NAT, higher bandwidth, high availability, no administration
upvoted 1 times
3 months, 4 weeks ago
You should create 3 NAT gateways, but not in the public subnet. So even though NAT instances are already deprecated, B is the right answer in this
case, since it creates them in a private subnet, not a public one.
upvoted 2 times
4 months ago
Refer:
https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html#public-nat-gateway-overview
Should be A.
upvoted 1 times
6 months ago
Selected Answer: A
Networking 101: A is the only right option.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
The correct answer is option A.
To enable Internet access for the private subnets, the solutions architect should create three NAT gateways, one for each public subnet in each
Availability Zone (AZ). NAT gateways allow private instances to initiate outbound traffic to the Internet but do not allow inbound traffic from the
Internet to reach the private instances.
The solutions architect should then create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. This will
allow instances in the private subnets to access the Internet through the NAT gateways in the public subnets.
upvoted 4 times
6 months, 1 week ago
Option A
A NAT gateway needs to be configured in each VPC's public subnet.
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: B
Should be B
upvoted 1 times
7 months, 3 weeks ago
https://www.examtopics.com/discussions/amazon/view/35679-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
8 months, 1 week ago
B should be the answer. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
upvoted 1 times
7 months, 1 week ago
Sir, you didn't even read the link you posted !! There it is clearly stated that when you need access to Internet from a private subnet you place
the NAT gateway in a PUBLIC subnet.
upvoted 6 times
7 months, 3 weeks ago
B is NAT Instances, which is outdated. The link you provided refers to NAT Gateways (the newer approach) - which means, A is the right answer.
upvoted 2 times
8 months, 1 week ago
Selected Answer: A
upvoted 3 times
Topic 1
Question #102
A company wants to migrate an on-premises data center to AWS. The data center hosts an SFTP server that stores its data on an NFS-based file
system. The server holds 200 GB of data that needs to be transferred. The server must be hosted on an Amazon EC2 instance that uses an
Amazon Elastic File System (Amazon EFS) file system.
Which combination of steps should a solutions architect take to automate this task? (Choose two.)
A. Launch the EC2 instance into the same Availability Zone as the EFS file system.
B. Install an AWS DataSync agent in the on-premises data center.
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.
D. Manually use an operating system copy command to push the data to the EC2 instance.
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.
Correct Answer:
AB
Highly Voted
8 months, 1 week ago
Selected Answer: AB
**A**. Launch the EC2 instance into the same Availability Zone as the EFS file system.
Makes sense to have the instance in the same AZ as the EFS storage.
**B**. Install an AWS DataSync agent in the on-premises data center.
DataSync will move the data to the EFS file system, which is already used by the EC2 instance (see the info provided). Nothing more is required...
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.
This secondary EBS volume isn't required... the data should be moved onto EFS...
D. Manually use an operating system copy command to push the data to the EC2 instance.
Potentially possible (instead of A), BUT the "automate this task" premise goes against any "manual" action. So, we should keep A.
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.
I don't get the relationship between DataSync and the configuration for the SFTP server "on-prem"! Nonsense.
So, answers are A&B
upvoted 32 times
4 months, 1 week ago
CORRECT ANSWER: B&E
Steps 4 & 5
https://aws.amazon.com/datasync/getting-started/?nc1=h_ls
upvoted 9 times
6 months, 2 weeks ago
will A,B work without E?
upvoted 3 times
7 months, 1 week ago
Can someone explain why A is correct?
EFS is spread across Availability Zones in a region, as per https://aws.amazon.com/blogs/gametech/gearbox-entertainment-goes-remote-with-
aws-and-perforce/
My question then is whether it makes sense to launch EC2 instances in the *same Availability Zone as the EFS file system* ?
upvoted 3 times
3 months, 1 week ago
However, launching the EC2 instance in the same AZ as the EFS file system can provide some performance benefits, such as reduced network
latency and improved throughput. Therefore, it may be a best practice to launch the EC2 instance in the same AZ as the EFS file system if
performance is a concern.
upvoted 1 times
5 months, 1 week ago
Yes exactly, that's why A doesn't make sense. I voted for B and E.
upvoted 3 times
7 months, 1 week ago
E is correct
https://aws.amazon.com/blogs/storage/migrating-storage-with-aws-datasync/
upvoted 3 times
Highly Voted
6 months, 1 week ago
Selected Answer: BE
Answer and HOW-TO
B. Install an AWS DataSync agent in the on-premises data center.
E. Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.
To automate the process of transferring the data from the on-premises SFTP server to an EC2 instance with an EFS file system, you can use AWS
DataSync. AWS DataSync is a fully managed data transfer service that simplifies, automates, and accelerates transferring data between on-premises
storage systems and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server.
To use AWS DataSync for this task, you should first install an AWS DataSync agent in the on-premises data center. This agent is a lightweight
software application that you install on your on-premises data source. The agent communicates with the AWS DataSync service to transfer data
between the data source and target locations.
upvoted 18 times
6 months, 1 week ago
Next, you should use AWS DataSync to create a suitable location configuration for the on-premises SFTP server. A location represents a data
source or a data destination in an AWS DataSync task. You can create a location for the on-premises SFTP server by specifying the IP address,
the path to the data, and the necessary credentials to access the data.
Once you have created the location configuration for the on-premises SFTP server, you can use AWS DataSync to transfer the data to the EC2
instance with the EFS file system. AWS DataSync handles the data transfer process automatically and efficiently, transferring the data at high
speeds and minimizing downtime.
upvoted 8 times
6 months, 1 week ago
Explanation of other options
A. Launch the EC2 instance into the same Availability Zone as the EFS file system.
This option is not wrong, but it is not directly related to automating the process of transferring the data from the on-premises SFTP server to
the EC2 instance with the EFS file system. Launching the EC2 instance into the same Availability Zone as the EFS file system can improve the
performance and reliability of the file system, as it reduces the latency between the EC2 instance and the file system. However, it is not
necessary for automating the data transfer process.
upvoted 5 times
6 months, 1 week ago
C. Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.
This option is incorrect because Amazon EBS is a block-level storage service that is designed for use with Amazon EC2 instances. It is not
suitable for storing large amounts of data that need to be accessed by multiple EC2 instances, like in the case of the NFS-based file
system on the on-premises SFTP server. Instead, you should use Amazon EFS, which is a fully managed, scalable, and distributed file
system that can be accessed by multiple EC2 instances concurrently.
upvoted 3 times
6 months, 1 week ago
D. Manually use an operating system copy command to push the data to the EC2 instance.
This option is not wrong, but it is not the most efficient or automated way to transfer the data from the on-premises SFTP server to
the EC2 instance with the EFS file system. Manually transferring the data using an operating system copy command would require
manual intervention and would not scale well for large amounts of data. It would also not provide the same level of performance and
reliability as a fully managed service like AWS DataSync.
upvoted 3 times
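The agent-plus-location flow described above can be sketched as the parameter dicts a boto3 call sequence would use (the ARNs, hostname, and path below are hypothetical): the agent installed on-premises (B) is referenced by the NFS source location (E), and a task ties source and destination together.

```python
def datasync_transfer_plan(agent_arn, nfs_server, nfs_path, efs_location_arn):
    """Parameter dicts for a DataSync NFS-source -> EFS-destination transfer."""
    source_location = {  # arguments for datasync.create_location_nfs(...)
        "ServerHostname": nfs_server,
        "Subdirectory": nfs_path,
        "OnPremConfig": {"AgentArns": [agent_arn]},  # reads through the on-prem agent
    }
    task = {             # arguments for datasync.create_task(...)
        "SourceLocationArn": "<LocationArn returned by create_location_nfs>",
        "DestinationLocationArn": efs_location_arn,
    }
    return source_location, task

src, task = datasync_transfer_plan(
    "arn:aws:datasync:us-east-1:123456789012:agent/agent-0example",      # hypothetical
    "sftp-server.example.internal",                                      # hypothetical
    "/exports/data",
    "arn:aws:datasync:us-east-1:123456789012:location/loc-0example",     # hypothetical
)
# The actual create_location_nfs / create_task / start_task_execution calls
# require AWS credentials and an activated agent.
```

Note that neither dict mentions the EC2 instance's AZ, which is why B and E are the two steps that actually automate the transfer.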
Most Recent
6 days, 14 hours ago
Selected Answer: BE
B. By installing an AWS DataSync agent in the on-premises data center, the architect can establish a secure connection between the on-premises
environment and AWS.
E. Once the DataSync agent is installed, the solutions architect should configure it to create a suitable location configuration that specifies the
source location as the on-premises SFTP server and the target location as the EFS. AWS DataSync will handle the secure and efficient transfer of the
data from the on-premises server to the EC2 using EFS.
A. Launching EC2 into the same AZ as the EFS is not directly related to automating the migration task.
C. Creating a secondary EBS on the EC2 for the data is not necessary when using EFS. EFS provides a scalable, fully managed NFS-based file system
that can be mounted directly on the EC2, eliminating the need for separate EBS.
D. It would require manual intervention and could be error-prone, especially for large amounts of data.
upvoted 2 times
1 week, 3 days ago
EFS is launched in the same region, so the answer is option AB.
upvoted 1 times
2 weeks, 6 days ago
Selected Answer: BE
B: DataSync to copy the data automatically
E: DataSync discovery job to identify how / where to store your data automatically
https://docs.aws.amazon.com/datasync/latest/userguide/getting-started-discovery-job.html
upvoted 1 times
4 weeks ago
Selected Answer: BE
A is irrelevant given the scenario.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BE
DataSync configuration is required
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BE
A : same AZ? why?
upvoted 1 times
2 months ago
Selected Answer: BE
B* To access your self-managed on-premises or cloud storage, you need an AWS DataSync agent that's associated with your AWS account.
https://docs.aws.amazon.com/datasync/latest/userguide/configure-agent.html
E* A location is a storage system or service that AWS DataSync reads from or writes to. Each DataSync transfer has a source and destination
location.
https://docs.aws.amazon.com/datasync/latest/userguide/working-with-locations.html
upvoted 1 times
2 months, 1 week ago
Selected Answer: BE
needs to install and provide a location so BE
upvoted 1 times
2 months, 1 week ago
Selected Answer: AB
A needs instance
B needs Datasync
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: AB
I chose AB and Chat GPT said AB
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: BE
vote for BE
upvoted 2 times
2 months, 2 weeks ago
AE is correct
A. It is recommended to launch the EC2 instance in the same AZ as the EFS file system to avoid any data transfer charges between different AZs
E. Using DataSync, the data from on-prem can be transferred directly to the EFS file system in an automated manner without a DataSync agent
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: AB
A must be chosen, cos' it said that the server must be on EC2. Being in the same AZ helps performance (no transfer between AZs).
B will sync the on-prem volume with the EBS in AWS.
After some days you can switch it off.
D) manual option? ==> nooooo
E) "create location configuration"??? That does not exist!
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: BE
I changed my response to B,E. https://docs.aws.amazon.com/datasync/latest/userguide/working-with-locations.html
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: AB
A. Launching the EC2 instance into the same Availability Zone as the EFS file system ensures that the instance can access the EFS file system. This
reduces latency and helps improve application performance.
B. Installing an AWS DataSync agent in the on-premises data center helps automate the migration process by enabling the agent to transfer the
data directly to the Amazon EFS file system. DataSync can perform incremental transfers of data and ensure data integrity.
upvoted 1 times
Topic 1
Question #103
A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an
Amazon S3 bucket. New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during
each run.
What should the solutions architect do to prevent AWS Glue from reprocessing old data?
A. Edit the job to use job bookmarks.
B. Edit the job to delete data after the data is processed.
C. Edit the job by setting the NumberOfWorkers field to 1.
D. Use a FindMatches machine learning (ML) transform.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: A
This is the purpose of bookmarks: "AWS Glue tracks data that has already been processed during a previous run of an ETL job by persisting state
information from the job run. This persisted state information is called a job bookmark. Job bookmarks help AWS Glue maintain state information
and prevent the reprocessing of old data."
https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
upvoted 28 times
Most Recent
6 days, 14 hours ago
Selected Answer: A
A. Job bookmarks in Glue allow you to track the last-processed data in a job. By enabling job bookmarks, Glue keeps track of the processed data
and automatically resumes processing from where it left off in subsequent job runs.
B. Results in the permanent removal of the data from the S3, making it unavailable for future job runs. This is not desirable if the data needs to be
retained or used for subsequent analysis.
C. It would only affect the parallelism of the job but would not address the issue of reprocessing old data. It does not provide a mechanism to track
the processed data or skip already processed data.
D. It is not directly related to preventing Glue from reprocessing old data. The FindMatches transform is used for identifying and matching
duplicate or matching records in a dataset. While it can be used in data processing pipelines, it does not address the specific requirement of
avoiding reprocessing old data in this scenario.
upvoted 2 times
2 months ago
Selected Answer: A
Job bookmarks make sure that the Glue job will not process already-processed files.
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
Job bookmarks are used in AWS Glue ETL jobs to keep track of the data that has already been processed in a previous job run. With bookmarks
enabled, AWS Glue will read the bookmark information from the previous job run and will only process the new data that has been added to the
data source since the last job run. This saves time and reduces costs by eliminating the need to reprocess old data.
Therefore, a solutions architect should edit the AWS Glue ETL job to use job bookmarks so that it will only process new data added to the S3
bucket since the last job run.
upvoted 2 times
2 months, 4 weeks ago
Selected Answer: A
Job bookmarks enable AWS Glue to track the data that has been processed in a previous run of the job. With job bookmarks enabled, AWS Glue
will only process new data that has been added to the S3 bucket since the previous run of the job, rather than reprocessing all data every time the
job runs.
upvoted 2 times
5 months, 3 weeks ago
Deleting files in S3 freely is not good, so B is not correct.
upvoted 1 times
6 months ago
Selected Answer: A
A is correct
upvoted 1 times
6 months ago
Selected Answer: A
Option A. Edit the job to use job bookmarks.
Job bookmarks in AWS Glue allow the ETL job to track the data that has been processed and to skip data that has already been processed. This can
prevent AWS Glue from reprocessing old data and can improve the performance of the ETL job by only processing new data. To use job
bookmarks, the solutions architect can edit the job and set the "Use job bookmark" option to "True". The ETL job will then use the job bookmark to
track the data that has been processed and skip data that has already been processed in subsequent runs.
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
It's obviously A. Bookmarks serve this purpose
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 2 times
8 months, 1 week ago
Selected Answer: A
A
https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
upvoted 3 times
Topic 1
Question #104
A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on
Amazon EC2 instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from
thousands of IP addresses. Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Choose two.)
A. Use AWS Shield Advanced to stop the DDoS attack.
B. Configure Amazon GuardDuty to automatically block the attackers.
C. Configure the website to use Amazon CloudFront for both static and dynamic content.
D. Use an AWS Lambda function to automatically add attacker IP addresses to VPC network ACLs.
E. Use EC2 Spot Instances in an Auto Scaling group with a target tracking scaling policy that is set to 80% CPU utilization.
Correct Answer:
AC
Highly Voted
8 months, 2 weeks ago
Selected Answer: AC
I think it is AC; the reason is that they require a solution that is highly available. AWS Shield can handle the DDoS attacks. To make the solution HA you can
use CloudFront. AC seems to be the best answer imo.
AB seem like redundant answers. How do those answers make the solution HA?
upvoted 19 times
7 months, 1 week ago
A - AWS Shield Advanced
C - (protecting this option) IMO: AWS Shield Advanced has to be attached. But it can not be attached directly to EC2 instances.
According to the docs: https://aws.amazon.com/shield/
It requires to be attached to services such as CloudFront, Route 53, Global Accelerator, ELB or (in the most direct way using) Elastic IP (attached
to the EC2 instance)
upvoted 15 times
Highly Voted
6 months ago
Selected Answer: AC
Option A. Use AWS Shield Advanced to stop the DDoS attack.
It provides always-on protection for Amazon EC2 instances, Elastic Load Balancers, and Amazon Route 53 resources. By using AWS Shield
Advanced, the solutions architect can help protect the website from large-scale DDoS attacks.
Option C. Configure the website to use Amazon CloudFront for both static and dynamic content.
CloudFront is a content delivery network (CDN) that integrates with other Amazon Web Services products, such as Amazon S3 and Amazon EC2, to
deliver content to users with low latency and high data transfer speeds. By using CloudFront, the solutions architect can distribute the website's
content across multiple edge locations, which can help absorb the impact of a DDoS attack and reduce the risk of downtime for the website.
upvoted 6 times
Most Recent
6 days, 14 hours ago
Selected Answer: AC
A. AWS Shield Advanced provides advanced DDoS protection for AWS resources, including EC2. It includes features such as real-time threat
intelligence, automatic protection, and DDoS cost protection.
C. CloudFront is a CDN service that can help mitigate DDoS attacks. By routing traffic through CloudFront, requests to the website are distributed
across multiple edge locations, which can absorb and mitigate DDoS attacks more effectively. CloudFront also provides additional DDoS protection
features, such as rate limiting, SSL/TLS termination, and custom security policies.
B. While GuardDuty can detect and provide insights into potential malicious activity, it is not specifically designed for DDoS mitigation.
D. Network ACLs are not designed to handle high-volume traffic or DDoS attacks efficiently.
E. Spot Instances are a cost optimization strategy and may not provide the necessary availability and protection against DDoS attacks compared to
using dedicated instances with DDoS protection mechanisms like Shield Advanced and CloudFront.
upvoted 2 times
2 months, 1 week ago
Selected Answer: AC
Key words:
For a DDoS attack, choose AWS Shield Advanced.
CloudFront can have AWS WAF attached.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: AC
A & C
but I don't fully understand why CloudFront is chosen.
The customer does not need it, and it's not exactly cheap.
Yes, it could serve the cached content to the attacker, lightening the load on the backend, but as I said it's not cheap, and the out-of-the-box AWS Shield
is free and can cope with the attack (as long as it isn't a WAF-style attack).
upvoted 1 times
4 months, 1 week ago
Selected Answer: AC
DDoS is better handled with Shield, and CloudFront also provides protection against DDoS.
upvoted 1 times
6 months ago
AC
"AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations worldwide.
You can protect your web applications hosted anywhere in the world by deploying Amazon CloudFront in front of your application. Your origin
servers can be Amazon Simple Storage Service (S3), Amazon EC2, Elastic Load Balancing, or a custom server outside of AWS."
https://aws.amazon.com/shield/faqs/
upvoted 1 times
6 months, 1 week ago
A and C, as you will need to configure CloudFront to activate AWS Shield Advanced.
upvoted 1 times
6 months, 2 weeks ago
AC, AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations worldwide
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: AC
C not B. B is wrong because it's not malicious activity, just annoying activity.
upvoted 1 times
7 months ago
Selected Answer: AC
I thought it was AB. But after I read the docs, I vote for AC.
Amazon GuardDuty is a threat detection service, it can NOT take action directly, it needs to work with Lambda.
upvoted 1 times
7 months, 1 week ago
A and C
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: AC
AWS Shield can handle the DDoS attacks
Amazon CloudFront supports DDoS protection, integration with Shield, AWS Web Application Firewall
upvoted 3 times
7 months, 3 weeks ago
Selected Answer: AC
correct
upvoted 1 times
7 months, 3 weeks ago
I believe it's A & E; the question speaks to two things.
1. That can mitigate a large-scale DDoS attack - (Ans A)
2. A solutions architect must design a highly available infrastructure for a website; downtime is not acceptable (Ans E)
So the answer is AE.
I guess we focus only on the DDoS attack aspect of the question
upvoted 2 times
6 months, 1 week ago
You need extra overhead to set up for E option. Target Tracking doesn't happen automatically when Auto Scaling is set up
upvoted 1 times
7 months ago
So, spot instances mean HA for you?
upvoted 2 times
7 months, 1 week ago
Spot Instances aren't always going to be highly available enough for certain situations. It's AC
upvoted 1 times
7 months, 4 weeks ago
Selected Answer: AB
Amazon GuardDuty has Threat response and remediation automation.
upvoted 1 times
6 months ago
No, GuardDuty's role is to detect, not block.
upvoted 1 times
8 months ago
A : handle DDoS
E: Use EC2 Spot Instances in an Auto Scaling group with a target tracking scaling policy that is set to 80% CPU utilization.
upvoted 1 times
7 months ago
Spot Instances are not reliable; they are for workloads which can tolerate downtime. So the answer should be A & C
upvoted 2 times
Topic 1
Question #105
A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure
permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?
A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
B. Add an execution role to the function with lambda:InvokeFunction as the action and Service: lambda.amazonaws.com as the principal.
C. Add a resource-based policy to the function with lambda:* as the action and Service: events.amazonaws.com as the principal.
D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service: events.amazonaws.com as the
principal.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
Best way to check it... The question is taken from the example shown here in the documentation:
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-resource-based.html#eb-lambda-permissions
upvoted 22 times
Highly Voted
6 months, 1 week ago
Selected Answer: D
The correct solution is D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:
events.amazonaws.com as the principal.
The principle of least privilege requires that permissions are granted only to the minimum necessary to perform a task. In this case, the Lambda
function needs to be able to be invoked by Amazon EventBridge (Amazon CloudWatch Events). To meet these requirements, you can add a
resource-based policy to the function that allows the InvokeFunction action to be performed by the Service: events.amazonaws.com principal. This
will allow Amazon EventBridge to invoke the function, but will not grant any additional permissions to the function.
upvoted 10 times
6 months, 1 week ago
Why other options are wrong
Option A is incorrect because it grants the lambda:InvokeFunction action to any principal (*), which would allow any entity to invoke the
function and goes beyond the minimum permissions needed.
Option B is incorrect because it grants the lambda:InvokeFunction action to the Service: lambda.amazonaws.com principal, which would allow
any Lambda function to invoke the function and goes beyond the minimum permissions needed.
Option C is incorrect because it grants the lambda:* action to the Service: events.amazonaws.com principal, which would allow Amazon
EventBridge to perform any action on the function and goes beyond the minimum permissions needed.
upvoted 8 times
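To see what option D looks like in practice, here is a sketch of the resource-based policy statement it describes. The function name and rule ARN below are made-up placeholders; the same grant is what an `aws lambda add-permission` call with `--principal events.amazonaws.com` and `--action lambda:InvokeFunction` would create.

```python
# Sketch of the least-privilege resource-based policy statement from option D.
# ARNs are hypothetical placeholders, not real resources.

def eventbridge_invoke_statement(function_arn, rule_arn):
    """Policy statement letting EventBridge invoke exactly one function."""
    return {
        "Effect": "Allow",
        "Principal": {"Service": "events.amazonaws.com"},
        "Action": "lambda:InvokeFunction",
        "Resource": function_arn,
        # Condition pins the grant to a single EventBridge rule.
        "Condition": {"ArnLike": {"AWS:SourceArn": rule_arn}},
    }

stmt = eventbridge_invoke_statement(
    "arn:aws:lambda:us-east-1:123456789012:function:my-func",
    "arn:aws:events:us-east-1:123456789012:rule/my-rule",
)
```

Note how every field is narrowed: one action, one service principal, one resource, and a source-ARN condition, which is the least-privilege shape the question is testing.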
Most Recent
6 days, 13 hours ago
Selected Answer: D
In this solution, a resource-based policy is added to the Lambda function, which allows the specified principal (events.amazonaws.com) to invoke
the function. The lambda:InvokeFunction action provides the necessary permission for the Amazon EventBridge rule to trigger the Lambda
function.
Option A is incorrect because it assigns the lambda:InvokeFunction action to all principals (*), which grants permission to invoke the function to any
entity, which is broader than necessary.
Option B is incorrect because it assigns the lambda:InvokeFunction action to the specific principal "lambda.amazonaws.com," which is the service
principal for AWS Lambda. However, the requirement is for the EventBridge service principal to invoke the function.
Option C is incorrect because it assigns the lambda:* action to the specific principal "events.amazonaws.com," which is the service principal for
Amazon EventBridge. However, it grants broader permissions than necessary, allowing any Lambda function action, not just
lambda:InvokeFunction.
upvoted 2 times
1 month ago
Option C is incorrect, the reason is that, firstly, lambda:* allows Amazon EventBridge to perform any action on the function and this is beyond the
minimum permissions needed.
upvoted 1 times
Community vote distribution: D (100%)
1 month, 3 weeks ago
Since it's for Lambda, which is a resource, a resource-based policy is the trick
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-permissions
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
The scope of D is the smallest, so that's the one
upvoted 1 times
6 months ago
Selected Answer: D
events.amazonaws.com is principal for eventbridge
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Least privilege means the principal cannot be "*". Answer B only mentions lambda. So the answer is D
upvoted 1 times
7 months ago
Selected Answer: D
My answer was D, as this is the most specific answer.
And then there's this guy's answer (123jhl0) which provides more details.
upvoted 1 times
Topic 1
Question #106
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key
usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
A. Server-side encryption with customer-provided keys (SSE-C)
B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
C. Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation
D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
The MOST operationally efficient one is D.
Automating the key rotation is the most efficient.
Just to confirm, the A and B options don't allow automate the rotation as explained here:
https://aws.amazon.com/kms/faqs/#:~:text=You%20can%20choose%20to%20have%20AWS%20KMS%20automatically%20rotate%20KMS,KMS%20custom%20key%20store%20feature
upvoted 14 times
6 months, 1 week ago
In addition you cannot log key usage in B, for A I am not certain
upvoted 1 times
7 months ago
Thank you for the explanation.
upvoted 1 times
Most Recent
6 days, 13 hours ago
Selected Answer: D
SSE-KMS provides a secure and efficient way to encrypt data at rest in S3. SSE-KMS uses KMS to manage the encryption keys securely. With SSE-
KMS, encryption keys can be automatically rotated using KMS key rotation feature, which simplifies the key management process and ensures
compliance with the requirement to rotate keys every year.
Additionally, SSE-KMS provides built-in audit logging for encryption key usage through CloudTrail, which captures API calls related to the
management and usage of KMS keys. This meets the requirement for logging key usage for auditing purposes.
Option A (SSE-C) requires customers to provide their own encryption keys, but it does not provide key rotation or built-in logging of key usage.
Option B (SSE-S3) uses Amazon S3 managed keys for encryption, which simplifies key management but does not provide key rotation or detailed
key usage logging.
Option C (SSE-KMS with manual rotation) uses AWS KMS keys but requires manual rotation, which is less operationally efficient than the automatic
key rotation available with option D.
upvoted 2 times
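To make option D concrete from the client side, here is a sketch assuming boto3's `put_object` parameter shape; the bucket, key, and KMS key alias are placeholders. Yearly rotation itself is enabled once per customer-managed key (e.g. via KMS `enable_key_rotation`), not on each upload.

```python
# Sketch of the SSE-KMS upload parameters described in option D. Bucket, key,
# and KMS key alias are hypothetical; boto3's s3.put_object(**params) would
# accept kwargs of this shape.

def sse_kms_put_kwargs(bucket, key, body, kms_key_id):
    """Build put_object kwargs so S3 encrypts the object under a KMS key."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # key usage is logged in CloudTrail
        "SSEKMSKeyId": kms_key_id,
    }

params = sse_kms_put_kwargs("audit-bucket", "report.csv", b"data", "alias/app-key")
```

With this in place, every encrypt/decrypt call against the KMS key shows up in CloudTrail, which is what satisfies the auditing requirement.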
5 months, 3 weeks ago
Selected Answer: D
Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation meets the requirements and is the most operationally efficient
solution. This option allows you to use AWS KMS to automatically rotate the keys every year, which simplifies key management. In addition, key
usage is logged for auditing purposes, and the data is encrypted at rest to meet compliance requirements.
upvoted 2 times
5 months, 4 weeks ago
Selected Answer: B
Amazon API Gateway is a fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at any scale. You can use
API Gateway to create a REST API that exposes the location data as an API endpoint, allowing you to access the data from your analytics platform.
AWS Lambda is a serverless compute service that lets you run code in response to events or HTTP requests. You can use Lambda to write the code
that retrieves the location data from your data store and returns it to API Gateway as a response to API requests. This allows you to scale the API to
handle a large number of requests without the need to provision or manage any infrastructure.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
Community vote distribution: D (92%), other (8%)
The most operationally efficient solution that meets the requirements listed would be option D: Server-side encryption with AWS KMS keys (SSE-
KMS) with automatic rotation.
SSE-KMS allows you to use keys that are managed by the AWS Key Management Service (KMS) to encrypt your data at rest. KMS is a fully managed
service that makes it easy to create and control the encryption keys used to encrypt your data. With automatic key rotation enabled, KMS will
automatically create a new key for you on a regular basis, typically every year, and use it to encrypt your data. This simplifies the key rotation
process and reduces the operational burden on your team.
In addition, SSE-KMS provides logging of key usage through AWS CloudTrail, which can be used for auditing purposes.
upvoted 1 times
6 months, 1 week ago
Why other options are wrong
Option A: Server-side encryption with customer-provided keys (SSE-C) would require you to manage the encryption keys yourself, which can be
more operationally burdensome.
Option B: Server-side encryption with Amazon S3 managed keys (SSE-S3) does not allow for key rotation or logging of the key usage.
Option C: Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation would require you to manually initiate the key rotation
process, which can be more operationally burdensome compared to automatic rotation.
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
You can choose to have AWS KMS automatically rotate KMS keys every year, provided that those keys were generated within AWS KMS HSMs.
Automatic key rotation is not supported for imported keys, asymmetric keys, or keys generated in a CloudHSM cluster using the AWS KMS custom
key store feature. If you choose to import keys to AWS KMS or asymmetric keys or use a custom key store, you can manually rotate them by
creating a new KMS key and mapping an existing key alias from the old KMS key to the new KMS key.
upvoted 1 times
6 months, 2 weeks ago
Can anybody correct me if I'm wrong: KMS does not offer automatic rotation, but SSE-KMS only allows automatic rotation once in 3 years, thus if
we want rotation every year we need to rotate it manually?
upvoted 2 times
6 months, 1 week ago
You're wrong :) "All AWS managed keys are automatically rotated every year. You cannot change this rotation schedule."
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: D
Agree. Also, SSE-S3 cannot be audited.
upvoted 2 times
Topic 1
Question #107
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company
wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support
this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
A. Use Amazon Athena with Amazon S3.
B. Use Amazon API Gateway with AWS Lambda.
C. Use Amazon QuickSight with Amazon Redshift.
D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: B
API Gateway is needed to get the data so option A and C are out.
“The company wants to use these data points in its existing analytics platform”, so there is no need to add Kinesis. Option D is also out.
This leaves us with option B as the correct one.
upvoted 53 times
5 months ago
I don't understand the use of a Lambda function here, maybe if there were a need to transform the data? Can you explain?
upvoted 3 times
5 months, 1 week ago
AWS Lambda is a serverless compute service that can be used to run code in response to specific events, such as changes to data in an Amazon
S3 bucket or updates to a DynamoDB table. It could be used to process the location data, but it doesn't provide storage solution. Therefore, it
would not be the best option for storing and retrieving location data in this scenario.
upvoted 3 times
Highly Voted
8 months ago
Selected Answer: D
I don't understand why you would vote B.
How are you going to store data with just Lambda?
> Which action meets these requirements for storing and retrieving location data
In this use case there will obviously be a ton of data and you want to get real-time location data of the bicycles, and to analyze all this info Kinesis
is the one that makes the most sense here.
upvoted 28 times
4 months, 2 weeks ago
But KDA also cannot store data.
upvoted 2 times
1 week, 3 days ago
KDA can store data with a retention period
upvoted 1 times
6 months, 1 week ago
Lambda isn't storing the data itself. It's pushing the data to the company's "existing data analytics platform"
6 months, 2 weeks ago
Real-time analytics on Kinesis Data Streams & Firehose using SQL, not store db ...
upvoted 3 times
7 months, 4 weeks ago
I vote D because the company HAS its analytics platform. Why pay? Kinesis is for analysis, not for storing. Can you explain? Thanks
upvoted 7 times
7 months, 3 weeks ago
Weird Q, as they already have their own data analysis platform
Hopefully I don't see this question in the exam lol
upvoted 12 times
Community vote distribution: B (52%), D (37%), other (11%)
2 months ago
I saw this question on the exam and choose D. Lol.
upvoted 2 times
7 months, 4 weeks ago
B Lambda and API
upvoted 2 times
7 months, 3 weeks ago
It can store according to the doc
There is no way for Lambda to store data, which is part of the requirements
upvoted 1 times
Most Recent
6 days, 13 hours ago
Selected Answer: B
Combination of API Gateway and Lambda provides a scalable and cost-effective solution for handling the REST API and processing the location
data. API Gateway can handle the API request and response management, while Lambda can process and store/retrieve the location data from a
suitable data store like Amazon S3.
Options A, C, and D are not the most suitable options for storing and retrieving location data in this scenario:
Option A suggests using Amazon Athena with Amazon S3, which is a query service for data in S3 but does not provide a direct REST API
integration for real-time location data retrieval.
Option C suggests using Amazon QuickSight with Amazon Redshift, which is more suitable for data analytics and visualization rather than real-time
data retrieval through a REST API.
Option D suggests using Amazon API Gateway with Amazon Kinesis Data Analytics, which is more suitable for real-time streaming analytics rather
than data storage and retrieval for REST APIs.
upvoted 3 times
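The API Gateway plus Lambda pattern that option B describes can be sketched as a minimal proxy-integration handler. The route, the `bikeId` path parameter, and the in-memory store below are assumptions for illustration; a real deployment would read from a backing store such as DynamoDB.

```python
import json

# Minimal sketch of the Lambda side of option B: an API Gateway proxy handler
# returning a bicycle's location. LOCATIONS is a hypothetical in-memory
# stand-in for the real data store.

LOCATIONS = {"bike-42": {"lat": 47.61, "lon": -122.33}}  # placeholder data

def handler(event, context):
    """Return one bicycle's latest location as an API Gateway proxy response."""
    bike_id = (event.get("pathParameters") or {}).get("bikeId")
    location = LOCATIONS.get(bike_id)
    if location is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(location)}
```

API Gateway would map something like `GET /bikes/{bikeId}` to this handler; the existing analytics platform then calls the REST endpoint rather than touching the store directly.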
1 week, 3 days ago
Selected Answer: D
" track the location of its bicycles during peak operating hours"
Amazon Kinesis Data Analytics is the easiest way to transform and analyze streaming data in real time.
upvoted 1 times
3 weeks ago
Selected Answer: D
The option D is correct answer.
upvoted 1 times
1 month ago
Kinesis Data Analytics, as the name suggests, is for analytics, not storing. Lambda isn't storing the data itself; it's pushing the data to
the company's "existing data analytics platform"
upvoted 1 times
1 month, 1 week ago
Selected Answer: D
The key words here are: existing analytics platform Kinesis, and The data points must be accessible from the REST API, hence the right answer is
option D.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
I would go for A, as no other answers have shown satisfactory solutions for storing data. The question hasn't even been explicit about the data
volume. Considering it's a bicycle sharing company, the location data generated every day can be huge.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
Between C and D, the correct one is D.
Kinesis Analytics outputs to S3, Redshift, Elasticsearch, and Kinesis Data Streams. Lambda requires additional logic to get the data stored somewhere.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: D
It's between B and D, but the giveaway is the need to store. KDA can store; Lambda cannot.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
Given an API is needed, it's B or D.
Given an existing analytics platform, what the solution needs to do is basically provide some querying into this existing platform. Therefore B
upvoted 3 times
1 month, 3 weeks ago
Selected Answer: B
Just remember this keyword: API GW
upvoted 1 times
2 months ago
Selected Answer: B
Data points in its existing analytics platform + Data points must be accessible from the REST API + Track the location of its bicycles during peak
operating hours
They already have an analytics platform, so A (Athena) and D (Kinesis Data Analytics) are out of the race even though S3 & API Gateway support REST
APIs. Now B and C are in the race. C will not support a REST API. So the answer should be B
upvoted 3 times
2 months, 1 week ago
Selected Answer: B
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any
scale. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers.
Using Amazon API Gateway with AWS Lambda enables the bicycle sharing company to easily create a REST API that can receive location data and
store it in a suitable data store such as Amazon S3 or Amazon DynamoDB. It also allows them to retrieve the data and feed it into their existing
analytics platform.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: B
Özgür Öztürk on Udemy and Ayti.tech in a blog said that the correct answer is B
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: D
vote for D
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Perfectly explained by CaoMengde09 :
Let's read again this key sentence : "The company wants to use these data points in its existing analytics platform"
So we already have an existing analytics platform, which means here that we should only support the architecture, not propose a new analytics
platform from scratch. So AWS API Gateway and Lambda are more than enough to bring the data to the client's EXISTING ANALYTICS PLATFORM.
Also, AWS Kinesis Data Analytics cannot work without an already provisioned AWS Kinesis Data Stream cluster. So D is far from enough to support
the architecture
upvoted 1 times
Topic 1
Question #108
A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs
to be removed from the website and the data must be sent to multiple target systems.
Which design should a solutions architect recommend?
A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple
Queue Service (Amazon SQS) queue for the targets to consume.
B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple
Queue Service (Amazon SQS) FIFO queue for the targets to consume.
C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon
Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon
Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
Correct Answer:
C
Highly Voted
7 months, 1 week ago
Selected Answer: A
Interesting point that Amazon RDS event notification doesn't support any notification when data inside DB is updated.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.overview.html
So subscription to RDS events doesn't give any value for Fanout = SNS => SQS
B is out because FIFO is not required here.
A is left as correct answer
upvoted 56 times
1 month, 1 week ago
I don't think A is a valid solution ... how do you send the data to multiple targets using a single SQS?
upvoted 2 times
1 month, 3 weeks ago
Wow, great find romko. Didn't realize that event notification doesn't notify when the data is changed; it notifies when major changes occur at the
DB level, like settings, etc.
upvoted 1 times
3 months ago
Listing the Amazon RDS event notification categories.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.ListingCategories.html
upvoted 1 times
7 months ago
Romko, you are right pal. Nice research.
There is RDS Fanout to SNS, but not specifically for DB level events (write, reads, etc).
It can fan out events at instance level (turn on, restart, update), cluster level (added to cluster, removed from cluster, etc). But not at DB level.
More detailed event list here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html
Correct answer is A.
upvoted 13 times
Highly Voted
5 months, 3 weeks ago
Selected Answer: A
RDS events only provide operational events such as DB instance events, DB parameter group events, DB security group events, and DB snapshot
events. What we need in the scenario is to capture data-modifying events (INSERT, DELETE, UPDATE) which can be achieved thru native functions or
stored procedures.
upvoted 8 times
5 months, 1 week ago
I agree with it requiring a native function or stored procedure, but can they in turn invoke a Lambda function? I have only seen this being
possible with Aurora, but not RDS - and I'm not able to find anything googling for it either. I guess it has to be possible, since there's no other
option that fits either.
Community vote distribution: A (66%), D (34%)
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times
5 months, 1 week ago
To add to that though, A also states to only use SQS (no SNS to SQS fan-out), which doesn't seem right as the message needs to go to
multiple targets?
upvoted 4 times
Most Recent
1 day, 18 hours ago
Selected Answer: D
SNS Fan out to multiple SQS
upvoted 1 times
1 week, 3 days ago
SQS does not send notifications to multiple targets. Answer is D
upvoted 1 times
2 weeks, 2 days ago
D: RDS event to SNS notification
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.overview.html
upvoted 1 times
4 weeks, 1 day ago
Option D is the most suitable for this scenario:
AWS Lambda can then process these messages in SQS to update the target systems.
The other options aren't as suitable because:
Option A and B: Lambda functions are not directly triggered by Amazon RDS updates.
Option C: SQS does not fan out to multiple SNS topics. It's the other way around; an SNS topic can fan out to multiple SQS queues.
upvoted 1 times
1 month ago
Answer is D. The fan-out pattern is basically from SNS to multiple SQS queues.
Option C is wrong because SQS doesn't support push.
upvoted 1 times
1 month ago
Selected Answer: D
A is WRONG. The question requires the message to go to "multiple target systems". How would a message in SQS route to multiple systems? You
need SNS to fan out.
upvoted 2 times
1 month, 1 week ago
Selected Answer: A
RDS event notification subscription doesn't support any notification when data is removed/deleted. SQS FIFO doesn't make sense.
So, A is the closest answer
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: C
The answer is C
upvoted 1 times
2 months ago
Selected Answer: D
For me "Multiple target systems" and SNS is the key
upvoted 2 times
2 months ago
Selected Answer: A
RDS resources eligible for event subscription
-DB instance
-DB snapshot
-DB parameter group
-DB security group
-RDS Proxy
-Custom engine version
upvoted 2 times
2 months, 2 weeks ago
Correct ans is D
RDS event notification subscription can be set up to trigger an event. This event will publish to an SNS topic, which can fan out to multiple SQS
queues to be processed by Lambda. This design is highly scalable and fault tolerant, and ensures the data is delivered to multiple targets. Also, it
decouples the different components of the system so they scale independently.
Options A and B are viable but do not leverage SNS, which provides functionality like fan-out and topic filtering
Option C is not recommended as it involves an additional layer of complexity
upvoted 1 times
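The SNS-to-multiple-SQS fan-out that option D describes can be illustrated with a toy in-memory model. Nothing here is a real AWS call; `Topic` and the queue lists are stand-ins that show why one published message reaches every target.

```python
# Toy sketch of the SNS fan-out pattern from option D: one published message
# is delivered to every subscribed queue. Pure in-memory stand-ins, no AWS.

class Topic:
    """Minimal SNS-like topic: publish copies the message to each subscriber."""
    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for queue in self.queues:  # fan-out: every queue gets its own copy
            queue.append(message)

# Two hypothetical target systems, each with its own queue.
inventory, billing = [], []
listing_sold = Topic()
listing_sold.subscribe(inventory)
listing_sold.subscribe(billing)
listing_sold.publish({"listing_id": "auto-123", "status": "sold"})
```

With a single SQS queue (options A/B), the first consumer to receive the message removes it, so the other targets never see it; that is the property the fan-out fixes.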
2 months, 3 weeks ago
Selected Answer: A
AAAAAAAAAAAAAAAA
upvoted 1 times
2 months, 3 weeks ago
Has to be D - how do you manage which of the multiple systems in option A gets which message? Multiple queues can subscribe to specific
topics, OR different application targets can consume from different SQS instances - preferably both!
So D for me
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
This option provides a clean separation of concerns, where the Lambda function is responsible for sending the updated data to the SQS queue, and
the targets can consume the messages from the queue at their own pace. This can help with scaling and reliability, as the targets can handle
messages independently and can be scaled up or down as needed.
upvoted 1 times
Topic 1
Question #109
A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded
to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in
the company's AWS account can have the ability to delete the objects.
What should a solutions architect do to meet these requirements?
A. Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.
B. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3
bucket’s default retention mode for new objects.
C. Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects
from any backup versions that the company has.
D. Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold
permission to the IAM policies of users who need to delete the objects.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
A - No as "specific users can delete"
B - No as "nonspecific amount of time"
C - No as "prevent the data from being change"
D - The answer: "The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a
legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and
remains in effect until removed." https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
upvoted 21 times
6 months ago
The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold
prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in
effect until removed.
Correct
upvoted 1 times
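Option D has two moving parts: the legal-hold call itself and the IAM grant for the specific users. Here is a sketch assuming boto3's `put_object_legal_hold` parameter shape; bucket and key names are placeholders.

```python
# Sketch of option D's two pieces. Bucket/key/ARN values are hypothetical.

def legal_hold_kwargs(bucket, key, status):
    """Kwargs for boto3 s3.put_object_legal_hold (Status is ON or OFF)."""
    return {"Bucket": bucket, "Key": key, "LegalHold": {"Status": status}}

def allow_legal_hold_statement(bucket_arn):
    """IAM statement granting permission to place or remove legal holds."""
    return {
        "Effect": "Allow",
        "Action": "s3:PutObjectLegalHold",
        "Resource": f"{bucket_arn}/*",
    }

hold = legal_hold_kwargs("compliance-bucket", "records.csv", "ON")
policy = allow_legal_hold_statement("arn:aws:s3:::compliance-bucket")
```

Because a legal hold has no retention period, it satisfies the "nonspecific amount of time" requirement: objects stay locked until a user with `s3:PutObjectLegalHold` sets the status to OFF.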
Highly Voted
8 months, 2 weeks ago
typo -- 10 delete the objects => TO delete the objects
upvoted 12 times
Most Recent
1 month ago
I go with option B, as they still need some specific users to be able to make changes, so governance mode is the best choice, and 100 years is like
infinity as well haha
upvoted 1 times
4 months ago
Selected Answer: D
The correct answer is D.
upvoted 1 times
4 months ago
Selected Answer: D
Option B specifies a retention period of 100 years which contradicts what the question asked for.....
"The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company
decides to modify the objects"
Setting the retention period of 100 years is specific and the company wants new data/objects to remain unchanged for nonspecific amount of time.
Correct answer is D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
upvoted 3 times
4 months, 1 week ago
Selected Answer: D
Community vote distribution: D (78%), B (22%)
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold prevents
an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until
removed." https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-legal-hold.html
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: D
A retention period of 100 years prevents the object from being deleted before the retention period expires, so it's not a good fit.
upvoted 1 times
5 months, 2 weeks ago
it is B.
Once a legal hold is enabled, regardless of the object's retention date or retention mode, the object version cannot be deleted until the legal hold
is removed.
Question says: "Specific users must have ability to delete objects"
upvoted 4 times
5 months, 3 weeks ago
Selected Answer: D
While S3 bucket governance mode does allow certain users with permissions to alter retention/delete objects, the 100 years in Option B makes it
invalid.
Correct answer is option D.
"With Object Lock you can also place a legal hold on an object version. Like a retention period, a legal hold prevents an object version from being
overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed. "
https://aws.amazon.com/s3/features/object-lock/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-legal-holds
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
With Object Lock, you can also place a legal hold on an object version. Like a retention period, a legal hold prevents an object version from being
overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed. Legal holds can be
freely placed and removed by any user who has the s3:PutObjectLegalHold permission.
B - No as "nonspecific amount of time" otherwise B will meet the requirement with legal hold attached.
upvoted 1 times
6 months ago
Wouldn't D require s3:GetBucketObjectLockConfiguration IAM permission? If so, D is incomplete and wouldn't meet the requirement.
(from the link shared above)
upvoted 1 times
6 months ago
Selected Answer: B
Correct answer : B
Retention mode - Governance:
• Most users can't overwrite or delete an object version or alter its lock settings
• Some users have special permissions to change the retention or delete the object
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
To meet the requirements specified in the question, the solution architect should choose Option B: Create an S3 bucket with S3 Object Lock
enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3 bucket's default retention mode for new objects.
S3 Object Lock is a feature of Amazon S3 that allows you to apply a retention period to objects in your bucket, during which time the objects
cannot be deleted or overwritten. By enabling versioning on the bucket, you can ensure that all versions of an object are retained, including any
deletions or overwrites. By setting a retention period of 100 years, you can ensure that the objects remain unchangeable for a long time.
By using governance mode as the default retention mode for new objects, you can ensure that the retention period is applied to all new objects
that are uploaded to the bucket. This will prevent the objects from being deleted or overwritten until the retention period expires.
upvoted 2 times
6 months, 1 week ago
Why other options are wrong
Option A (creating an S3 Glacier vault and applying a WORM vault lock policy) would not meet the requirement to prevent the objects from
being changed, because S3 Glacier is a storage class for long-term data archival and does not support read-write operations.
Option C (using CloudTrail to track API events and restoring modified objects from backup versions) would not prevent the objects from being
changed in the first place.
Option D (adding a legal hold and the s3:PutObjectLegalHold permission to IAM policies) would not meet the requirement to prevent the
objects from being changed for a nonspecific amount of time.
upvoted 1 times
6 months, 1 week ago
Legal holds are used to prevent objects that are subject to legal or compliance requirements from being deleted or overwritten, even if their
retention period has expired. While legal holds can be useful for preventing the accidental deletion of important objects, they do not prevent
the objects from being changed. S3 Object Lock can be used to prevent objects from being deleted or overwritten for a specified retention
period, but a legal hold does not provide this capability.
In addition, the s3:PutObjectLegalHold permission allows users to place a legal hold on an object, but it does not prevent the object from
being changed. To prevent the objects from being changed for a nonspecific amount of time, the solution architect should use S3 Object
Lock and set a longer retention period on the objects.
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold prevents
an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until
removed."
upvoted 1 times
6 months, 3 weeks ago
Answer is D, the key here is that no specific retention period was set by the company and this is exactly what differentiates Legal hold from
Governance
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 1 times
6 months, 4 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
With Object Lock you can also place a legal hold on an object version. Like a retention period, a legal hold prevents an object version from being
overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed. Legal holds can be
freely placed and removed by any user who has the s3:PutObjectLegalHold permission.
upvoted 1 times
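The interplay between legal holds, governance mode, and compliance mode that this thread debates can be sketched as a small model. This is an illustrative simplification, not the real S3 API; the function and field names are hypothetical:

```python
from datetime import datetime, timezone, timedelta

def can_delete(version, now=None, bypass_governance=False):
    """Toy model of S3 Object Lock deletion rules (not the real API).

    A legal hold blocks deletion until removed, regardless of any retention
    date; a GOVERNANCE-mode retention can be bypassed by users who hold the
    s3:BypassGovernanceRetention permission; COMPLIANCE mode cannot.
    """
    now = now or datetime.now(timezone.utc)
    if version.get("legal_hold") == "ON":
        return False                  # legal hold wins; no retention date involved
    retain_until = version.get("retain_until")
    if retain_until and now < retain_until:
        if version.get("mode") == "GOVERNANCE" and bypass_governance:
            return True               # privileged user bypasses governance retention
        return False                  # COMPLIANCE, or governance without bypass
    return True                       # retention expired (or never set)
```

This is why B fails the "specific users must be able to delete" requirement only because of the 100-year date, while D's legal hold can be lifted at any time by anyone with s3:PutObjectLegalHold.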
Topic 1
Question #110
A social media company allows users to upload images to its website. The website runs on Amazon EC2 instances. During upload requests, the
website resizes the images to a standard size and stores the resized images in Amazon S3. Users are experiencing slow upload requests to the
website.
The company needs to reduce coupling within the application and improve website performance. A solutions architect must design the most
operationally efficient process for image uploads.
Which combination of actions should the solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to upload images to S3 Glacier.
B. Configure the web server to upload the original images to Amazon S3.
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a presigned URL.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to resize uploaded
images.
Correct Answer:
BD
Highly Voted
6 months, 1 week ago
Selected Answer: CD
To meet the requirements of reducing coupling within the application and improving website performance, the solutions architect should consider
taking the following actions:
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a pre-signed URL. This will allow
the application to upload images directly to S3 without having to go through the web server, which can reduce the load on the web server and
improve performance.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image. This will
allow the application to resize images asynchronously, rather than having to do it synchronously during the upload request, which can improve
performance.
upvoted 27 times
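The asynchronous resize step in option D can be sketched as a Lambda handler that unpacks the S3 event record. This is a hedged skeleton: the event shape matches what S3 Event Notifications deliver, but the resize itself is stubbed (a real function would fetch the object and use an image library such as Pillow):

```python
def handler(event, context=None):
    """Sketch of the S3 Event Notification -> Lambda resize flow (answer D).

    S3 invokes the function with one record per object event; each record
    carries the bucket name and object key. The actual download/resize/upload
    is stubbed out here.
    """
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real handler would: get_object from `bucket`, resize the bytes,
        # then put_object into a destination bucket for resized images.
        results.append({"bucket": bucket, "key": key, "status": "resized"})
    return {"processed": len(results), "items": results}
```

Because the resize runs after the upload completes, the user's request returns as soon as the original object lands in S3, which is the decoupling the question asks for.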
2 months, 3 weeks ago
A presigned URL is for downloading data from S3, not for uploads, so the user does not upload anything. C is not correct.
upvoted 3 times
2 months, 1 week ago
A presigned URL can be used for uploads.
upvoted 2 times
2 months ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
upvoted 2 times
1 month, 3 weeks ago
A presigned URL works for uploads or downloads, for a limited time, and for specific users outside the company.
upvoted 1 times
1 month, 3 weeks ago
but only for a temporary purpose, not a permanent one
upvoted 1 times
6 months, 1 week ago
Why other options are wrong
Option A, Configuring the application to upload images to S3 Glacier, is not relevant to improving the performance of image uploads.
Option B, Configuring the webserver to upload the original images to Amazon S3, is not a recommended solution as it would not reduce
coupling within the application or improve performance.
Option E, Creating an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to resize
uploaded images, is not a recommended solution as it would not be able to resize images in a timely manner and would not improve
performance.
upvoted 2 times
Community vote distribution
BD (51%)
CD (47%)
4 months, 2 weeks ago
Here it means to decouple the processes so that the web server doesn't have to do the resizing and doesn't slow down. The customers access
the web server, so the web server has to be involved in the process, and as others have already written, the presigned URL is not the right
solution, for the reasons you can read in the other comments.
And additionally: "Configure the application to upload images directly from EACH USER'S BROWSER to Amazon S3 through the use of a pre-
signed URL"
I am not an expert, but I can't imagine that you can store an image that a user uploads in his browser, etc.
upvoted 3 times
Highly Voted
3 months, 2 weeks ago
Selected Answer: BD
Why would anyone vote C? signed URL is for temporary access. also, look at the vote here:
https://www.examtopics.com/discussions/amazon/view/82971-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 11 times
Most Recent
1 week, 4 days ago
Selected Answer: BD
Correct answers are BD
upvoted 1 times
1 month ago
BC BC BC
upvoted 1 times
1 month ago
pre-signed URL is not the correct answer as it allows you to grant temporary access to users who don't have permission to directly run AWS
operations in your account.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BD
B D are correct options.
upvoted 3 times
1 month, 3 weeks ago
Selected Answer: BD
C : in AWS skill builder, there's similar question. If you want to choose pre-signed URL, then there should be needs for security to download a
specific or own user.
upvoted 2 times
2 months ago
Selected Answer: BD
I think C doesn't make sense
upvoted 3 times
2 months ago
Selected Answer: CD
Your app can generate pre-signed URLs every time you want to allow an arbitrary party to perform an upload or a download of an S3 object by
simply using the HTTP protocol and without having to know anything about S3 or the AWS authentication protocols.
https://fourtheorem.com/the-illustrated-guide-to-s3-pre-signed-urls/
upvoted 2 times
2 months, 1 week ago
Selected Answer: CD
B is wrong because bucket are private by default there will be explicit statement to switch to public.
So C is the correct one. Using the presinged URL works even with a private/public bucket
upvoted 1 times
2 months, 2 weeks ago
The correct answers are B and C.
With B there is no web server needed to resize the images, reducing coupling within the app. This offloads the resizing task from the web server to S3,
which can handle the task more efficiently.
With C the upload requests will be faster since the images are uploaded directly to S3. The presigned URL is a temporary URL generated by the app
that grants permission to upload a specific object to S3.
Option D is not the best solution since it involves Lambda, which may introduce additional latency in the image upload process.
Options A and E are not relevant to the scenario.
upvoted 1 times
2 months, 2 weeks ago
ANSWER- CD : Overview of serverless uploading to S3
When you upload directly to an S3 bucket, you must first request a signed URL from the Amazon S3 service. You can then upload directly using the
signed URL. This is two-step process for your application front end(https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-
from-a-web-or-mobile-application/)
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: BD
NOT A: S3 Glacier for archive data
NOT C: presigned URL is for download the data from S3, not for uploads, so the user does not upload anything.
NOT E: this is not a scheduled demand, but a "live" demand.
upvoted 2 times
2 months, 2 weeks ago
A presigned URL can be used while uploading.
upvoted 3 times
3 months ago
Selected Answer: BD
to me : BD
upvoted 3 times
3 months, 1 week ago
you can use a presigned URL to optionally share objects or allow your customers/users to upload objects to buckets without AWS security
credentials or permissions. https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html
upvoted 2 times
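As the docs linked above note, presigned URLs do support uploads: the HTTP method (here PUT) is baked into the SigV4 signature. A stdlib-only sketch of how such a URL is assembled follows — purely illustrative, since in practice boto3's `generate_presigned_url` does this for you; the bucket, key, and credentials below are made up:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_put(bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build a SigV4 query-string-presigned PUT URL for S3 (simplified sketch)."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
                  for k, v in sorted(params.items()))
    canonical = "\n".join([
        "PUT",                 # the signed method: this is why presigned URLs work for uploads
        f"/{key}",
        qs,
        f"host:{host}\n",      # canonical headers, followed by a blank separator line
        "host",                # signed header names
        "UNSIGNED-PAYLOAD",    # body is not part of the signature for presigned PUTs
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    signing_key = _hmac(_hmac(_hmac(_hmac(("AWS4" + secret_key).encode(),
                        datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{qs}&X-Amz-Signature={signature}"
```

Anyone holding this URL can PUT the object until `X-Amz-Expires` elapses, with no AWS credentials of their own — which is what answer C relies on.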
3 months, 1 week ago
B + D looks right to me.
upvoted 1 times
3 months, 2 weeks ago
B+D looks correct as creating & using presigned url is not operationally efficient
upvoted 1 times
Topic 1
Question #111
A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an
Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the
messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low
operational complexity.
Which architecture offers the HIGHEST availability?
A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone.
Replicate the MySQL database to another Availability Zone.
B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another
Availability Zone. Replicate the MySQL database to another Availability Zone.
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in
another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2
instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
Answer is D, as it is the most highly available and the least operationally complex.
The "Amazon RDS for MySQL with Multi-AZ enabled" option excludes A and B.
The "Auto Scaling group" is more available and reduces operational complexity in case of incidents (since remediation is automated) than just
adding one more instance. This excludes C.
With C and D left to choose from, D wins since the Auto Scaling group is configured across two Availability Zones.
upvoted 12 times
Most Recent
6 days, 13 hours ago
Selected Answer: D
Amazon MQ with active/standby brokers configured across two AZ ensures high availability for the message broker. In case of a failure in one AZ,
the other AZ's broker can take over seamlessly.
Adding an ASG for the consumer EC2 instances across two AZ provides redundancy and automatic scaling based on demand. If one consumer
instance becomes unavailable or if the message load increases, the ASG can automatically launch additional instances to handle the workload.
Using RDS for MySQL with Multi-AZ enabled ensures high availability for the database. Multi-AZ automatically replicates the database to a standby
instance in another AZ. If a failure occurs, RDS automatically fails over to the standby instance without manual intervention.
This architecture combines high availability for the message broker (Amazon MQ), scalability and redundancy for the consumer EC2 instances
(ASG), and high availability for the database (RDS Multi-AZ). It offers the highest availability with low operational complexity by leveraging
managed services and automated failover mechanisms.
upvoted 1 times
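The active/standby failover behavior the comment above describes can be sketched generically. This is a toy illustration — real Amazon MQ clients receive a failover connection string and the broker client library handles the retry itself:

```python
def call_with_failover(endpoints, send):
    """Try each broker endpoint in order; return the first successful result.

    `send` is any callable that raises ConnectionError when a broker
    endpoint is unreachable (e.g. during an AZ outage).
    """
    last_error = None
    for endpoint in endpoints:
        try:
            return send(endpoint)
        except ConnectionError as exc:
            last_error = exc        # active broker down: fall through to standby
    raise last_error
```

Having the managed service (and client library) own this retry logic, instead of a self-managed ActiveMQ on EC2, is a large part of why D has lower operational complexity.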
1 week, 4 days ago
Selected Answer: D
Correct answer D
upvoted 1 times
3 weeks ago
Selected Answer: D
To achieve HA plus low operational complexity, the solutions architect has to choose option D, which fulfills these requirements.
upvoted 1 times
1 month ago
Auto scaling and Multi-AZ enabled for high availability.
upvoted 1 times
3 months, 1 week ago
you can find some details about the Amazon MQ active/standby broker for high availability: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html
upvoted 1 times
Community vote distribution
D (96%)
4%
5 months ago
Selected Answer: D
D as the Auto Scaling group offer the highest availability between all solutions
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D offers the highest availability because it addresses all potential points of failure in the system:
Amazon MQ with active/standby brokers configured across two Availability Zones ensures that the message queue is available even if one
Availability Zone experiences an outage.
An Auto Scaling group for the consumer EC2 instances across two Availability Zones ensures that the consumer application is able to continue
processing messages even if one Availability Zone experiences an outage.
Amazon RDS for MySQL with Multi-AZ enabled ensures that the database is available even if one Availability Zone experiences an outage.
upvoted 3 times
6 months, 1 week ago
Option A addresses some potential points of failure, but it does not address the potential for the consumer application to become unavailable
due to an Availability Zone outage.
Option B addresses some potential points of failure, but it does not address the potential for the database to become unavailable due to an
Availability Zone outage.
Option C addresses some potential points of failure, but it does not address the potential for the consumer application to become unavailable
due to an Availability Zone outage.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 2 times
7 months, 1 week ago
D is correct
upvoted 1 times
8 months ago
Selected Answer: A
I don't know about D. Active/Standby adds to fault tolerance but does nothing for HA.
upvoted 1 times
6 months, 3 weeks ago
Fault tolerance goes up a level from HA. Active Standby contributes to HA.
upvoted 1 times
7 months, 2 weeks ago
Amazon RDS is preferable to self-managed MySQL on EC2, hence A and B are eliminated
upvoted 1 times
8 months ago
Selected Answer: D
agree with D
upvoted 1 times
Topic 1
Question #112
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is
growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS
with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling.
Use an Application Load Balancer to distribute the incoming requests.
B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming
requests.
C. Use AWS Lambda with a new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use
Amazon API Gateway as an entry point to the Lambda functions.
D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming
requests at the appropriate scale.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: A
Less operational overhead means A: Fargate (no EC2), move the containers to ECS, auto scaling for growth, and an ALB to balance incoming requests.
B - requires configuring EC2
C - requires writing new code (developers)
D - seems like the most complex approach, like re-architecting the app to take advantage of an HPC platform.
upvoted 11 times
Most Recent
6 days, 12 hours ago
Selected Answer: A
Option A (AWS Fargate on Amazon ECS with Service Auto Scaling) is the best choice as it provides a serverless and managed environment for your
containerized web application. It requires minimal code changes, offers automatic scaling, and utilizes an Application Load Balancer for request
distribution.
Option B (Amazon EC2 instances with an Application Load Balancer) requires manual management of EC2 instances, resulting in more operational
overhead compared to option A.
Option C (AWS Lambda with API Gateway) may require significant code changes and restructuring, introducing complexity and potentially
increasing development effort.
Option D (AWS ParallelCluster) is not suitable for a containerized web application and involves significant setup and configuration overhead.
upvoted 1 times
1 month ago
Selected Answer: A
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2
instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need
to choose server types, decide when to scale your clusters, or optimize cluster packing.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: A
Least Operational Overhead = Serverless
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: A
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2
instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
upvoted 1 times
4 months, 2 weeks ago
A is correct
Community vote distribution
A (100%)
upvoted 1 times
6 months ago
Selected Answer: A
The best solution to meet the requirements with the least operational overhead is Option A: Use AWS Fargate on Amazon Elastic Container Service
(Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming
requests.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A has minimum operational overhead and almost no application code changes.
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
8 months ago
Selected Answer: A
Agreed with A,
Lambda would work too but requires more operational overhead (more chores);
with A, you are just moving from an on-premises container to an AWS container.
upvoted 3 times
Topic 1
Question #113
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the
company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and
needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must
configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS
Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2
instance on AWS to run the transformation application.
Correct Answer:
C
Highly Voted
8 months, 1 week ago
Selected Answer: C
A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue. - No BW available for DataSync, so "asap" will be
weeks/months (?)
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device. - Snowcone will just store 14TB (SSD
configuration).
**C**. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS
Glue. - SnowBall can store 80TB (ok), takes around 1 week to move the device (faster than A), and AWS Glue allows to do ETL jobs. This is the
answer.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2
instance on AWS to run the transformation application. - Same as C, but the ETL job requires the deployment/configuration/maintenance of an EC2
instance, while Glue is serverless. This means D has more operational overhead than C.
upvoted 32 times
2 months, 3 weeks ago
I agree. "Least operational overhead" does not take into account the migration activities necessary to reach the final scenario. For
operational overhead you consider only the final scenario and how you operate it; if that scheme is lighter (less effort to operate than the
original scenario), that's the desired state.
4 months, 3 weeks ago
I disagree on D. The transformation job is already in place, so all you have to do is deploy it and run it on EC2.
C takes more effort to build the Glue process, like reinventing the wheel; this is unnecessary.
upvoted 5 times
Highly Voted
5 months, 2 weeks ago
Selected Answer: D
Why C? This answer misses the part between SnowBall and AWS Glue.
D at least provides a full-step solution that copies data in snowball device, and installs the custom application in device's EC2 to do the
transformation job.
upvoted 8 times
Most Recent
6 days, 12 hours ago
Option A (AWS DataSync with AWS Glue) involves using AWS DataSync for data transfer, which requires available network bandwidth. Since the
data center has no additional network bandwidth, this option is not suitable.
Option B (AWS Snowcone device with deployment) is designed for smaller workloads and may not have enough storage capacity for transferring
50 TB of data. Additionally, deploying the transformation application on the Snowcone device could introduce complexity and operational
overhead.
Option D (AWS Snowball Edge with EC2 compute) involves transferring the data using a Snowball Edge device and then creating a new EC2
instance in AWS to run the transformation application. This option adds additional complexity and operational overhead of managing an EC2
instance.
In comparison, option C offers a straightforward and efficient approach. The Snowball Edge Storage Optimized device can handle the large data
transfer without relying on network bandwidth. Once the data is transferred, AWS Glue can be used to create the transformation job, ensuring the
continuity of the application's processing in the AWS Cloud.
upvoted 1 times
Community vote distribution
C (67%)
D (33%)
2 weeks, 5 days ago
Selected Answer: C
Correctly answer is C.
“The data center does not have any available network bandwidth for additional workloads.”
upvoted 1 times
1 month, 2 weeks ago
Option is C.
“The data center does not have any available network bandwidth for additional workloads.”
With D a new EC2 instance needs to be created, so I choose option C.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: C
LEAST operational overhead = Serverless = Glue
upvoted 3 times
2 months ago
Selected Answer: D
The question states: "A custom application in the company's data center runs a weekly data transformation job."
Rebuilding the existing app with Glue is more effort than reusing it.
Answer: D
upvoted 1 times
2 months, 1 week ago
Selected Answer: C
D is far too manual, lots of overhead
upvoted 2 times
2 months, 1 week ago
Selected Answer: C
I'm voting C and not D because creating a new EC2 instance in Snowball to run the transformation application has more overhead than running
Glue. Another thing to consider is that answer C does not mandate us to install Glue in Snowball, we can run Glue after the data has been uploaded
from Snowball to AWS.
upvoted 2 times
3 months, 1 week ago
Selected Answer: C
C has less operational overhead than D. Managing EC2 has higher operational overhead than serverless AWS Glue
upvoted 2 times
3 months, 3 weeks ago
I was originally going to vote for C, however it is D because of 2 reasons. 1) AWS love to promote their own products, so Glue is most likely and 2)
because Glue presents the least operational overhead moving forward as it is serverless unlike an EC2 instance which requires patching, feeding
and watering
upvoted 2 times
2 months, 1 week ago
Answer C uses Glue, answer D uses EC2, so I believe you probably meant you're voting for C.
upvoted 3 times
3 months, 3 weeks ago
Selected Answer: C
Using the EC2 instance created on the Snowball Edge for the transformation job would only do it once. However, the solutions architect must
configure the transformation job to continue to run in the AWS Cloud, so it's AWS Glue.
upvoted 1 times
4 months ago
Selected Answer: D
Let's not forget that even a compute-optimized Snowball cannot run Glue. Basically a NAS with S3 and EC2 is what you get, so it can't be C
(unless you run storage on premises and Glue in the cloud with a DX/VPN).
upvoted 1 times
2 months, 1 week ago
The answer does not say you have to run Glue inside snowball edge, it just says you would use Glue, which could be after snowball edge
reaches Amazon facilities and the data is uploaded to AWS.
upvoted 1 times
4 months ago
Is it possible to use AWS Glue service on snowball edge?
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: D
perfect fit is D
upvoted 1 times
5 months ago
.... and the AI maven says :
A solution that would meet these requirements with the least operational overhead is to use AWS Snowball Edge. Snowball Edge is a data transfer
device that can transfer large amounts of data into and out of the AWS cloud with minimal network bandwidth requirements. Additionally,
Snowball Edge can run custom scripts on the device, so the transformation job can be configured to continue running during the transfer. Once the
transfer is complete, the data can be loaded into an AWS storage service such as Amazon S3. This solution would minimize operational overhead
by allowing for a parallel transfer and processing of data, rather than requiring the application to be paused.
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: C
Option A is incorrect. Although you can use AWS DataSync to automate and accelerate data transfer from on-premises to AWS storage services, it's
not capable of replicating existing applications running on your server.
Option B is incorrect as AWS Snowcone supports data collection and data processing using AWS compute services but offers only 8 TB of HDD-
based storage. It's not the best option for transferring 50 TB of data, as it would require multiple iterations of offline data transfer.
I will go for C as it seems to have less operational overhead.
upvoted 1 times
Topic 1
Question #114
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload
images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and
Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary
significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the
growing user base.
Which solution meets these requirements?
A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store
the photos and metadata.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
Do not store images in databases ;)... correct answer should be C
upvoted 25 times
Most Recent
6 days, 12 hours ago
Selected Answer: C
Solution C offloads the photo processing to Lambda. Storing the photos in S3 ensures scalability and durability, while keeping the metadata in
DynamoDB allows for efficient querying of the associated information.
Option A does not provide an appropriate solution for storing the photos, as DynamoDB is not suitable for storing large binary data like images.
Option B is more focused on real-time streaming data processing and is not the ideal service for processing and storing photos and metadata in
this use case.
Option D involves manual scaling and management of EC2 instances, which is less flexible and more labor-intensive compared to the serverless
nature of Lambda. It may not efficiently handle the varying number of concurrent users and can introduce higher operational overhead.
In conclusion, option C provides the best solution for scaling the application to meet the needs of the growing user base by leveraging the
scalability and durability of Lambda, S3, and DynamoDB.
upvoted 1 times
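The split that answer C describes — the binary in S3, the small queryable attributes in DynamoDB — can be sketched like this. The key schema and names are made up for illustration; a real app would follow this with `put_object` and `put_item` calls via the AWS SDK:

```python
import hashlib

def plan_upload(user_id, image_bytes, frame_id):
    """Split one photo upload into an S3 object plus a DynamoDB metadata item.

    The large binary goes to S3 under a content-addressed key; DynamoDB keeps
    only small attributes (frame choice, pointer to S3, size) that the app
    needs to query. Key schema here is hypothetical.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    s3_key = f"photos/{user_id}/{digest}.jpg"
    item = {                              # DynamoDB item in the low-level wire format
        "pk": {"S": f"USER#{user_id}"},
        "sk": {"S": f"PHOTO#{digest}"},
        "frame_id": {"S": frame_id},
        "s3_key": {"S": s3_key},          # pointer back to the binary in S3
        "size_bytes": {"N": str(len(image_bytes))},
    }
    return s3_key, item
```

Keeping items small is also what lets DynamoDB scale with the highly variable concurrent-user load the question describes; DynamoDB items are capped at 400 KB, so storing the photos themselves there (answer A) would not work for typical images.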
2 weeks, 5 days ago
Selected Answer: C
Option C is the best.
upvoted 1 times
1 month ago
Selected Answer: C
C is the correct answer; with A you would be storing images in the database, which you shouldn't do
upvoted 1 times
2 months ago
Selected Answer: C
Go for C which is able to scale
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
Option A is not the most suitable solution for handling a potentially high load of concurrent users, since Lambda instances have an execution
time limit and high load can cause a significant delay in the application's response. Moreover, it provides no scalable solution for storing the
images.
Option C provides a scalable solution for processing and storing images and metadata. The application can use AWS Lambda to process the
photos and store the images in Amazon S3, which is a scalable and highly available storage service. The metadata can be stored in DynamoDB,
which is a scalable, high-performance database service that can handle a large number of simultaneous requests.
upvoted 3 times
Community vote distribution
C (100%)
6 days, 12 hours ago
Yes sir, Siarra!
upvoted 1 times
2 months, 3 weeks ago
C!
upvoted 1 times
3 months, 2 weeks ago
C is the answer
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: C
the optimal solution
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Have a look at this discussion: https://www.quora.com/How-can-I-use-DynamoDB-for-storing-metadata-for-Amazon-S3-objects
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C involves using AWS Lambda to process the photos and storing the photos in Amazon S3, which can handle a large amount of data and
scale to meet the needs of the growing user base. Retaining DynamoDB to store the metadata allows the application to continue to use a fast and
highly available database for this purpose.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
According to the Well-Architected Framework, option C is the safest and most efficient option.
upvoted 1 times
6 months, 1 week ago
Static content, C
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
This solution meets the requirements because it uses AWS Lambda to process the photos, which can automatically scale to meet the needs of the
growing user base. The photos can be stored in Amazon S3, which is a highly scalable and durable object storage service. DynamoDB can be
retained to store the metadata, which can also scale to meet the needs of the growing user base. This solution allows the application to scale to
meet the needs of the growing user base, while also ensuring that the photos and metadata are stored in a scalable and durable manner.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
Photos need to be on S3.
upvoted 1 times
7 months ago
Selected Answer: C
C for sure
I was originally leaning toward A because it seemed like a simpler setup to keep the images and metadata in the same service, but DynamoDB has
an item size limit of 400 KB, so S3 would be better for image storage and then DynamoDB for metadata
upvoted 4 times
Topic 1
Question #115
A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on
Amazon S3. The EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but they do not require any
other network access.
A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet.
Which change to the network architecture should a solutions architect recommend to meet this requirement?
A. Create a NAT gateway. Configure the route table for the public subnets to send traffic to Amazon S3 through the NAT gateway.
B. Configure the security group for the EC2 instances to restrict outbound traffic so that only traffic to the S3 prefix list is permitted.
C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private
subnets.
D. Remove the internet gateway from the VPC. Set up an AWS Direct Connect connection, and route traffic to Amazon S3 over the Direct
Connect connection.
Correct Answer:
C
6 days, 12 hours ago
Selected Answer: C
Option A (creating a NAT gateway) would not meet the requirement since it still involves sending traffic to S3 over the internet. NAT gateway is
used for outbound internet connectivity from private subnets, but it doesn't provide a private route for accessing S3.
Option B (configuring security groups) focuses on controlling outbound traffic using security groups. While it can restrict outbound traffic, it
doesn't provide a private route for accessing S3.
Option D (setting up Direct Connect) involves establishing a dedicated private network connection between the on-premises environment and
AWS. While it offers private connectivity, it is more suitable for hybrid scenarios and not necessary for achieving private access to S3 within the VPC.
In summary, option C provides a straightforward solution by moving the EC2 instances to private subnets, creating a VPC endpoint for S3, and
linking the endpoint to the route table for private subnets. This ensures that file transfer traffic between the EC2 instances and S3 remains within
the private network without going over the internet.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
According to the Well-Architected Framework, option C is the safest and most efficient option.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
The correct answer is C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table
for the private subnets.
To meet the new requirement of transferring files over a private route, the EC2 instances should be moved to private subnets, which do not have
direct access to the internet. This ensures that the traffic for file transfers does not go over the internet.
To enable the EC2 instances to access Amazon S3, a VPC endpoint for Amazon S3 can be created. VPC endpoints allow resources within a VPC to
communicate with resources in other services without the traffic being sent over the internet. By linking the VPC endpoint to the route table for the
private subnets, the EC2 instances can access Amazon S3 over a private connection within the VPC.
upvoted 3 times
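As a sketch of what option C's endpoint looks like in code (IDs are placeholders; this only builds the parameters you would pass to boto3's `ec2.create_vpc_endpoint`):

```python
def s3_gateway_endpoint_params(vpc_id: str, route_table_ids: list, region: str = "us-east-1"):
    """Build the parameters for ec2.create_vpc_endpoint (boto3).

    A *gateway* endpoint for S3 is free and is linked to the private
    subnets' route tables, so S3 traffic never leaves the AWS network.
    The vpc_id and route_table_ids values are placeholders supplied by the caller.
    """
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,
    }
```

Usage would be something like `boto3.client("ec2").create_vpc_endpoint(**s3_gateway_endpoint_params("vpc-0abc", ["rtb-0def"]))`, after which the private subnets' route tables gain an S3 prefix-list route through the endpoint.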
6 months, 1 week ago
Option A (Create a NAT gateway) would not work, as a NAT gateway is used to allow resources in private subnets to access the internet, while
the requirement is to prevent traffic from going over the internet.
Option B (Configure the security group for the EC2 instances to restrict outbound traffic) would not achieve the goal of routing traffic over a
private connection, as the traffic would still be sent over the internet.
Option D (Remove the internet gateway from the VPC and set up an AWS Direct Connect connection) would not be necessary, as the
requirement can be met by simply creating a VPC endpoint for Amazon S3 and routing traffic through it.
upvoted 1 times
5 months, 2 weeks ago
How about the question of moving the instances across subnets? Because according to AWS you can't do it:
https://aws.amazon.com/premiumsupport/knowledge-center/move-ec2-instance/#:~:text=It%27s%20not%20possible%20to%20move,%2C%20Availability%20Zone%2C%20or%20VPC.
Kindly clarify. Maybe I missed something.
Community vote distribution: C (100%)
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
7 months ago
C is correct.
There is no requirement for public access from the internet.
The application must be moved into a private subnet. This is a prerequisite for using VPC endpoints with S3.
https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
upvoted 4 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
Use VPC endpoint
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
Use a VPC endpoint and make the EC2 instances private
upvoted 1 times
7 months, 2 weeks ago
Use VPC endpoint
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
VPC endpoint is the best choice to route S3 traffic without traversing the internet. Option A alone can't be used, as a NAT gateway requires an internet
gateway for outbound internet traffic. Option B would still require traversing the internet, and option D is also not a suitable solution.
upvoted 3 times
Topic 1
Question #116
A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are
burdensome. The company is redesigning its website and wants a new solution. The website will be updated four times a year and does not need
to have any dynamic content available. The solution must provide high scalability and enhanced security.
Which combination of changes will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Configure Amazon CloudFront in front of the website to use HTTPS functionality.
B. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality.
C. Create and deploy an AWS Lambda function to manage and serve the website content.
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
E. Create the new website. Deploy the website by using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer.
Correct Answer:
AD
Highly Voted
8 months, 1 week ago
A -> We can configure CloudFront to require HTTPS from clients (enhanced security)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html
D -> storing a static website on S3 provides scalability and less operational overhead than configuring an Application LB and EC2 instances (hence
E is out)
B is out since an AWS WAF web ACL does not provide HTTPS functionality; it only protects HTTPS traffic.
upvoted 22 times
Highly Voted
8 months ago
Selected Answer: AD
agree with A and D
static website -> obviously S3, and S3 is super scalable
CDN -> CloudFront obviously as well, and with HTTPS security is enhanced.
B does not make sense because you are not replacing the CDN with anything.
E works too, but it takes too much effort, and S3 still wins in terms of scalability. Plus, why use EC2 when you are only hosting a static
website?
upvoted 5 times
2 weeks, 5 days ago
Amazon CloudFront is for securely delivering content with low latency and high transfer speeds.
But what about SQL injection and XSS attacks? We use WAF and also use HTTPS.
https://www.f5.com/glossary/web-application-firewall-waf#:~:text=A%20WAF%20protects%20your%20web,and%20what%20traffic%20is%20safe.
A WAF protects your web apps by filtering, monitoring, and blocking any malicious HTTP/S traffic traveling to the web application, and prevents
any unauthorized data from leaving the app.
The answer is WAF, not CloudFront.
upvoted 1 times
1 month, 3 weeks ago
does not need to have any dynamic content available
upvoted 1 times
Most Recent
6 days, 12 hours ago
Selected Answer: AD
A. Amazon CloudFront provides scalable content delivery with HTTPS functionality, meeting security and scalability requirements.
D. Deploying the website on an Amazon S3 bucket with static website hosting reduces operational overhead by eliminating server maintenance
and patching.
Why other options are incorrect:
B. AWS WAF does not provide HTTPS functionality or address patching and maintenance.
C. Using AWS Lambda introduces complexity and does not directly address patching and maintenance.
E. Managing EC2 instances and an Application Load Balancer increases operational overhead and does not minimize patching and maintenance
tasks.
In summary, configuring Amazon CloudFront for HTTPS and deploying on Amazon S3 with static website hosting provide security, scalability, and
reduced operational overhead.
Community vote distribution: AD (79%), 11%, 7%
upvoted 1 times
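A minimal sketch of the A + D combination, assuming boto3: one helper builds the WebsiteConfiguration for `s3.put_bucket_website`, the other a fragment of a CloudFront cache behavior that forces HTTPS (a full DistributionConfig needs many more fields; names here are illustrative):

```python
def static_site_website_config(index_doc: str = "index.html", error_doc: str = "error.html"):
    """WebsiteConfiguration dict for s3.put_bucket_website (boto3).

    The document names are common defaults, not requirements.
    """
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }


def https_cache_behavior(target_origin_id: str):
    """Fragment of a CloudFront DefaultCacheBehavior that redirects viewers
    to HTTPS. A real DistributionConfig requires many additional fields.
    """
    return {
        "TargetOriginId": target_origin_id,
        "ViewerProtocolPolicy": "redirect-to-https",
    }
```

There are no servers to patch in this setup, which is exactly the "least operational overhead" angle of the question.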
2 weeks, 5 days ago
Selected Answer: AD
AD
A for enhanced security D for static content
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: AD
LEAST operational overhead = Serverless
https://aws.amazon.com/serverless/
upvoted 2 times
1 month, 4 weeks ago
AD misses the operational part: how can the app work without a Lambda function, an EC2 instance, or something similar?
upvoted 1 times
2 months, 1 week ago
Selected Answer: AD
People do not seem to get the LEAST OPERATIONAL OVERHEAD statement; many keep voting for options that bring far too much operational work.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: AD
A for enhanced security
D for static content
upvoted 2 times
3 months, 1 week ago
Amazon S3 is unlimited and you pay as you go, so there is no limit to scale as your data grows. D is therefore one of the correct answers, and the
other correct answer is A, because of this:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
so my answer is AD.
upvoted 1 times
4 months ago
I vote A & C, the reason being least operational overhead.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: AD
Here a perfect explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
upvoted 1 times
5 months ago
Selected Answer: AD
Simple and secure
upvoted 1 times
5 months, 1 week ago
Selected Answer: AD
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
A. Configure Amazon CloudFront in front of the website to use HTTPS functionality.
By deploying the website on an S3 bucket with static website hosting enabled, the company can take advantage of the high scalability and cost-
efficiency of S3 while also reducing the operational overhead of managing and patching a CMS.
By configuring Amazon CloudFront in front of the website, it will automatically handle the HTTPS functionality, this way the company can have a
secure website with very low operational overhead.
upvoted 1 times
6 months, 1 week ago
Selected Answer: CD
KEYWORD: LEAST operational overhead
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
C. Create and deploy an AWS Lambda function to manage and serve the website content.
Option D (using Amazon S3 with static website hosting) would provide high scalability and enhanced security with minimal operational overhead
because it requires little maintenance and can automatically scale to meet increased demand.
Option C (using an AWS Lambda function) would also provide high scalability and enhanced security with minimal operational overhead. AWS
Lambda is a serverless compute service that runs your code in response to events and automatically scales to meet demand. It is easy to set up and
requires minimal maintenance.
upvoted 3 times
6 months, 1 week ago
Why other options are not correct?
Option A (using Amazon CloudFront) and Option B (using an AWS WAF web ACL) would provide HTTPS functionality but would require
additional configuration and maintenance to ensure that they are set up correctly and remain secure.
Option E (using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer) would provide high scalability, but it
would require more operational overhead because it involves managing and maintaining EC2 instances.
upvoted 1 times
6 months, 1 week ago
Selected Answer: AD
A and D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: AD
A: for high availability and security through CloudFront HTTPS
D: scalable storage solution and support for static hosting
upvoted 1 times
7 months, 1 week ago
A and D
upvoted 1 times
Topic 1
Question #117
A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs
in Amazon OpenSearch Service (Amazon Elasticsearch Service) in near-real time.
Which solution will meet this requirement with the LEAST operational overhead?
A. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
B. Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon
Elasticsearch Service).
C. Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon
OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.
D. Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure
Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Correct Answer:
C
Highly Voted
8 months ago
Selected Answer: A
answer is A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
> You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in NEAR REAL-TIME
through a CloudWatch Logs subscription
least overhead compared to kinesis
upvoted 45 times
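For what it's worth, the console's "stream to OpenSearch" wizard wires this up as a CloudWatch Logs subscription filter whose destination is a helper Lambda that the wizard creates. A hedged sketch of the `logs.put_subscription_filter` parameters (the ARN, log group, and filter name below are placeholders):

```python
def opensearch_subscription_params(log_group: str, lambda_arn: str):
    """Parameters for logs.put_subscription_filter (boto3).

    The CloudWatch console's "stream to OpenSearch" wizard creates a helper
    Lambda as the subscription destination; the ARN and names here are
    placeholder values, not real resources.
    """
    return {
        "logGroupName": log_group,
        "filterName": "opensearch-stream",
        "filterPattern": "",          # empty pattern forwards every log event
        "destinationArn": lambda_arn,
    }
```

A call like `boto3.client("logs").put_subscription_filter(**opensearch_subscription_params(...))` would then stream new log events in near real time.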
5 months, 4 weeks ago
Option A (Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service)) is not a
suitable option, as a CloudWatch Logs subscription is designed to send log events to a destination such as an Amazon Simple Notification
Service (Amazon SNS) topic or an AWS Lambda function. It is not designed to write logs directly to Amazon Elasticsearch Service (Amazon ES).
upvoted 3 times
4 months, 2 weeks ago
that is not true, you can stream logs from CloudWatch Logs directly to OpenSearch
upvoted 3 times
5 months, 3 weeks ago
Zerotn3 is right! There should be a Lambda for writing into ES
upvoted 1 times
8 months ago
Great link. Convinced me
upvoted 5 times
Highly Voted
6 months, 1 week ago
Selected Answer: C
The correct answer is C: Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream source. Configure
Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.
This solution uses Amazon Kinesis Data Firehose, which is a fully managed service for streaming data to Amazon OpenSearch Service (Amazon
Elasticsearch Service) and other destinations. You can configure the log group as the source of the delivery stream and Amazon OpenSearch
Service as the destination. This solution requires minimal operational overhead, as Kinesis Data Firehose automatically scales and handles data
delivery, transformation, and indexing.
upvoted 10 times
2 weeks, 5 days ago
ANSWER A
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/integrations.html
You can use CloudWatch or Kinesis, but the Kinesis description never says real time, whereas the CloudWatch description does: "You can load
streaming data from CloudWatch Logs to your OpenSearch Service domain by using a CloudWatch Logs subscription. For information about Amazon
CloudWatch subscriptions, see Real-time processing of log data with subscriptions."
upvoted 1 times
Community vote distribution: A (67%), C (31%)
6 months, 1 week ago
Option A: Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service) would
also work, but it may require more operational overhead as you would need to set up and manage the subscription and ensure that the logs are
delivered in near-real time.
Option B: Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon
Elasticsearch Service) would also work, but it may require more operational overhead as you would need to set up and manage the Lambda
function and ensure that it scales to handle the incoming logs.
Option D: Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure
Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service) would also work, but it may require
more operational overhead as you would need to install and configure the Kinesis Agent on each application server and set up and manage the
Kinesis Data Streams.
upvoted 2 times
5 months ago
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times
Most Recent
6 days, 11 hours ago
By configuring a CloudWatch Logs subscription, you can stream the logs from CloudWatch Logs to Amazon OpenSearch Service in near-real-time.
This solution requires minimal operational overhead as it leverages the built-in functionality of CloudWatch Logs and Amazon OpenSearch Service
for log streaming and indexing.
Option B (Creating an AWS Lambda function) would involve additional development effort and maintenance of a custom Lambda function to write
the logs to Amazon OpenSearch Service.
Option C (Creating an Amazon Kinesis Data Firehose delivery stream) introduces an additional service (Kinesis Data Firehose) that may not be
necessary for this specific requirement, adding unnecessary complexity.
Option D (Installing and configuring Amazon Kinesis Agent) also introduces additional overhead in terms of manual installation and configuration
on each application server, which may not be needed if the logs are already stored in CloudWatch Logs.
In summary, option A is the correct choice as it provides a straightforward and efficient way to stream logs from CloudWatch Logs to Amazon
OpenSearch Service with minimal operational overhead.
upvoted 2 times
5 days, 23 hours ago
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
upvoted 1 times
2 weeks, 1 day ago
Selected Answer: C
I vote for C.
Solution A adds an unnecessary hop.
upvoted 1 times
3 weeks, 2 days ago
Selected Answer: C
A is wrong because subscriptions cannot be sent directly to Opensearch, see 'destination arn' in
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
Correct answer is C
upvoted 1 times
1 month ago
@six _fingers is right!!!! You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in
near real-time through a CloudWatch Logs subscription.
answer is A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
This should be C. OpenSearch is one of the main destinations for Kinesis Data Firehose.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: C
C for me and ChatGPT
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: C
choose C after seeing all comments from community
upvoted 2 times
3 months ago
Selected Answer: C
Must be C, https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
"You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in near real-time through a
CloudWatch Logs subscription. For more information, see Real-time processing of log data with subscriptions.".
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
"You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an
Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems."
CloudWatch cannot stream directly to Amazon OpenSearch Service.
upvoted 3 times
2 weeks, 6 days ago
The link above supports answer A, not C; there is no mention of Kinesis.
upvoted 1 times
4 months, 1 week ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: A
The correct answer remains A. Kindly check the link for a confirmation.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 3 times
5 months ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
Option C (Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon
OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination) would be the best option as it allows to easily and securely
stream logs from CloudWatch Logs to Amazon Elasticsearch Service in near-real time with minimal operational overhead. Data Firehose is designed
specifically for data stream processing and can automatically handle tasks such as data transformation, data validation, and data loading,
simplifying the process of sending logs to Amazon Elasticsearch Service.
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
A. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
This solution meets the requirement of storing all application logs in Amazon OpenSearch Service (Amazon Elasticsearch Service) with the least
operational overhead. A CloudWatch Logs subscription allows you to automatically stream logs from CloudWatch Logs to a destination such as
Elasticsearch Service, Kinesis Data Streams, or Lambda without the need for additional configurations and management.
It eliminates the need for additional infrastructure, Lambda functions and configurations, or separate agents to handle the logs transfer to
Elasticsearch Service.
upvoted 3 times
5 months, 1 week ago
Answer : A
Based on Keywords and Documentation : A is the Answer
You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in "near real-time through a
CloudWatch Logs subscription"
upvoted 1 times
4 months, 2 weeks ago
But the CloudWatch Logs log group does NOT do the store (write) side itself. It just streams data to Amazon OpenSearch Service.
upvoted 1 times
5 months, 2 weeks ago
The answer is C. The " in near-real time" makes it more accurate and least operational overhead.
upvoted 3 times
Topic 1
Question #118
A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide
access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods
of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the
application at all times. The company is concerned about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Amazon OpenSearch Service (Amazon Elasticsearch Service)
D. Amazon S3
Correct Answer:
D
6 days, 10 hours ago
Selected Answer: D
Amazon S3 (Simple Storage Service) is a highly scalable and cost-effective storage service. It is well-suited for storing large amounts of data, such
as the 900 TB of text documents mentioned in the scenario. S3 provides high durability, availability, and performance.
Option A (Amazon EBS) is block storage designed for individual EC2 instances and may not scale as seamlessly and cost-effectively as S3 for large
amounts of data.
Option B (Amazon EFS) is a scalable file storage service, but it may not be the most cost-effective option compared to S3, especially for the
anticipated storage size of 900 TB.
Option C (Amazon OpenSearch Service) is a search and analytics service and may not be suitable as the primary storage solution for the text
documents.
In summary, Amazon S3 is the recommended choice as it offers high scalability, cost-effectiveness, and durability for storing the large repository of
text documents required by the web application.
upvoted 1 times
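A rough back-of-the-envelope comparison for the 900 TB in the question. The per-GB prices below are illustrative us-east-1 figures and change over time (assumptions, not current quotes): S3 Standard around $0.023/GB-month versus EFS Standard around $0.30/GB-month.

```python
def monthly_storage_cost_usd(size_tb: float, price_per_gb_month: float) -> float:
    """Rough monthly storage cost: TB converted to GB (x1024) times a per-GB-month price."""
    return size_tb * 1024 * price_per_gb_month

# Illustrative list prices (assumptions, subject to change):
s3_cost = monthly_storage_cost_usd(900, 0.023)   # ~21,197 USD/month
efs_cost = monthly_storage_cost_usd(900, 0.30)   # ~276,480 USD/month
```

Even after adding request and retrieval charges, that order-of-magnitude gap is why S3 is the cost-effective answer here.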
1 month ago
Selected Answer: D
The 900 TB in the question is there to divert our thinking. When you have the keyword "least" in a question, S3 is the only thing we should look at.
upvoted 1 times
1 month ago
EFS and S3 meet the requirements but S3 is a better option because it is cheaper.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: D
MOST cost-effective = S3 (unless explicitly stated in the requirements)
upvoted 2 times
2 months, 1 week ago
Selected Answer: D
S3 is the cheapest and most scalable.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
Now in OpenSearch you can reach 3 PB, so option C is better.
With S3 in an intensive scenario, the costs of retrieving the objects could be high.
Yes, OpenSearch is NOT cheap, but this has to be analysed carefully.
So, I opt for "C" to increase the discussion.
With UltraWarm, you can retain up to 3 PB of data on a single Amazon OpenSearch Service cluster, while reducing your cost per GB by nearly 90%
compared to the warm storage tier. You can also easily query and visualize the data in your Kibana interface (version 7.10 and earlier) or
OpenSearch Dashboards. Analyze both your recent (weeks) and historical (months or years) log data without spending hours or days restoring
archived logs.
https://aws.amazon.com/es/opensearch-service/features/
Community vote distribution: D (93%), 7%
upvoted 2 times
2 months, 3 weeks ago
EFS is a good option but expensive alongside S3, and the customer is concerned about cost; thus: S3 (D)
upvoted 2 times
3 months ago
I wonder why people choose S3, yet S3 max capacity is 5 TB 🤔.
upvoted 1 times
3 months ago
My bad, the 5TB limit is for individual files. S3 has virtually unlimited storage capacity.
upvoted 5 times
4 months, 1 week ago
Selected Answer: D
A. EBS is block storage attached to individual instances; it is not suitable here.
B. EFS is file storage and would work, but it is more expensive.
C. OpenSearch is useful but can only accommodate up to 600 TiB and is mainly for search and analytics.
D. S3 is more cost-effective than all of them and can store objects of any content type.
upvoted 4 times
5 months, 1 week ago
Selected Answer: D
D. Amazon S3
Amazon S3 is an object storage service that can store and retrieve large amounts of data at any time, from anywhere on the web. It is designed for
high durability, scalability, and cost-effectiveness, making it a suitable choice for storing a large repository of text documents. With S3, you can
store and retrieve any amount of data, at any time, from anywhere on the web, and you can scale your storage up or down as needed, which will
help to meet the demand of the web application. Additionally, S3 allows you to choose between different storage classes, such as standard,
infrequent access, and archive, which will enable you to optimize costs based on your specific use case.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
The most cost-effective storage solution for a web application that needs to scale to meet high demand and store a large repository of text
documents would be Amazon S3. Amazon S3 is an object storage service that is designed for durability, availability, and scalability. It can store and
retrieve any amount of data from anywhere on the internet, making it a suitable choice for storing a large repository of text documents.
Additionally, Amazon S3 is designed to be highly scalable and can easily handle periods of high demand without requiring any additional
infrastructure or maintenance.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: D
Is there anything cheaper than S3?
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
D. Amazon S3 is the most cost-effective storage solution that meets the requirements described.
Amazon S3 is an object storage service that is designed to store and retrieve large amounts of data from anywhere on the web. It is highly scalable,
highly available, and cost-effective, making it an ideal choice for storing a large repository of text documents that will experience periods of high
demand. S3 is a standalone storage service that can be accessed from anywhere, and it is designed to handle large numbers of objects, making it
well-suited for storing the 900 TB repository of text documents described in the scenario. It is also designed to handle high levels of demand,
making it suitable for handling periods of high demand.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Only EFS and S3 meet the requirements, but S3 is the better option because it is cheaper.
upvoted 4 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: D
Only EFS and S3 meet the requirements. Since EFS would make it much more costly, S3 is the viable option.
upvoted 4 times
Topic 1
Question #119
A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2
Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL
injection and cross-site scripting attacks.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.
B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.
C. Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.
D. Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage.
Correct Answer:
A
Highly Voted
7 months, 3 weeks ago
Selected Answer: B
If you want to use AWS WAF across accounts, accelerate WAF configuration, automate the protection of new resources, use Firewall Manager with
AWS WAF
upvoted 18 times
Highly Voted
7 months, 3 weeks ago
B
Using AWS WAF has several benefits. Additional protection against web attacks using criteria that you specify. You can define criteria using
characteristics of web requests such as the following:
Presence of SQL code that is likely to be malicious (known as SQL injection).
Presence of a script that is likely to be malicious (known as cross-site scripting).
AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for a variety of protections.
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
upvoted 13 times
6 months, 1 week ago
Q: Can I create security policies across Regions?
No, AWS Firewall Manager security policies are Region-specific. Each Firewall Manager policy can only include resources available in that
specified AWS Region. You can create a new policy for each Region where you operate.
So you could not centrally (i.e., in one place) configure policies; you would need to do this in each Region.
upvoted 2 times
Most Recent
6 days, 10 hours ago
Selected Answer: B
B. By setting up AWS Firewall Manager, you can centrally configure AWS WAF rules, which can be applied to multiple AWS accounts and Regions.
This allows for efficient management and enforcement of security rules across accounts without the need for separate configuration in each
individual Region.
Option A (Setting up AWS WAF with Regional web ACLs) requires setting up and managing AWS WAF in each Region separately, which increases
administrative effort.
Option C (Setting up AWS Shield with Regional web ACLs) primarily focuses on DDoS protection and may not provide the same level of protection
against SQL injection and cross-site scripting attacks as AWS WAF.
Option D (Setting up AWS Shield in one Region) provides DDoS protection but does not directly address protection against SQL injection and
cross-site scripting attacks.
In summary, option B offers the most efficient and centralized approach by leveraging AWS Firewall Manager to configure AWS WAF rules across
multiple Regions, minimizing administrative effort while ensuring protection against SQL injection and cross-site scripting attacks.
upvoted 1 times
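Whether the web ACLs come from Firewall Manager (B) or are created directly (A), each one is ultimately associated with a REST API stage by ARN. A sketch of the ARN format involved, assuming boto3's wafv2 client; the IDs in the example are placeholders:

```python
# Sketch: the ARN format WAFv2 expects when associating a web ACL
# with an API Gateway REST API stage. IDs below are placeholders.

def apigw_stage_arn(region: str, api_id: str, stage: str) -> str:
    """Build the REST API stage ARN used by wafv2 associate_web_acl."""
    return f"arn:aws:apigateway:{region}::/restapis/{api_id}/stages/{stage}"

# With boto3, per Region:
# wafv2.associate_web_acl(
#     WebACLArn=web_acl_arn,
#     ResourceArn=apigw_stage_arn("us-east-1", "a1b2c3", "prod"),
# )
```

Note the account ID is empty in this ARN form; API Gateway stage ARNs for WAF association omit it.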
1 month ago
AAAAAAAAAAA
upvoted 1 times
2 months ago
Crazy community voting !
Correct answer is => A : AWS Firewall Manager security policies are region specific. Each Firewall Manager policy can only include resources
available in that specified AWS Region.
upvoted 3 times
2 months, 3 weeks ago
Selected Answer: B
Option A provides protection against SQL injection and cross-site scripting using AWS WAF, which is a web application firewall solution. However, this option requires configuring AWS WAF in each Region individually and associating a web access control list (ACL) with an API stage. This can result in significant administrative effort if there are multiple Regions and API stages that must be protected.
Option B is a centralized solution that uses AWS Firewall Manager to manage AWS WAF rules across multiple Regions. With this option, it is possible to configure the AWS WAF rules in a single place and apply them uniformly to all relevant Regions. This solution can significantly reduce the administrative effort compared with option A.
upvoted 3 times
3 months ago
Prerequisites for using AWS Firewall Manager
Your account must be a member of AWS Organizations
Your account must be the AWS Firewall Manager administrator
You must have AWS Config enabled for your accounts and Regions
To manage AWS Network Firewall or Route 53 resolver DNS Firewall, the AWS Organizations management account must enable AWS Resource
Access Manager (AWS RAM).
Can anybody explain "least administrative effort" to me?
I will go with A.
If I am wrong, please correct me.
upvoted 1 times
2 months, 3 weeks ago
When they say "LEAST amount of administrative effort" they ignore the "transition costs" of getting to the final scenario. It only takes into account the ongoing administrative effort, assuming all the migration tasks and prerequisites were already done.
So B is probably BEST.
upvoted 1 times
4 months ago
Selected Answer: B
https://aws.amazon.com/blogs/security/centrally-manage-aws-waf-api-v2-and-aws-managed-rules-at-scale-with-firewall-manager/
upvoted 1 times
4 months ago
B.
Set up AWS Firewall Manager
https://docs.aws.amazon.com/waf/latest/developerguide/enable-disabled-region.html
Create WAF policies separate for each Region:
https://docs.aws.amazon.com/waf/latest/developerguide/get-started-fms-create-security-policy.html
To protect resources in multiple Regions (other than CloudFront distributions), you must create separate Firewall Manager policies for each Region.
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: A
I' ll go with A.
B is wrong because
To protect resources in multiple Regions (other than CloudFront distributions), you must create separate Firewall Manager policies for each Region.
https://docs.aws.amazon.com/waf/latest/developerguide/get-started-fms-create-security-policy.html
upvoted 5 times
5 months, 3 weeks ago
Though options A and B are both valid, the question is about administrative efficiency. Since only 2 Regions are in consideration, it is much easier to provision WAF than a central Firewall Manager (plus WAF).
Regarding "to protect API Gateways across multiple accounts": maybe that is extra information. Web ACLs are at the Regional level and essentially filter HTTP messages irrespective of the account, i.e., they apply to all accounts.
upvoted 1 times
4 months, 1 week ago
A and B are viable options. However, because there are two Regions, instead of creating WAF twice (once for each Region), simply create it all at once in the central Firewall Manager. Imagine you need to make some changes later: rather than changing each one, one by one, you change it once in the central Firewall Manager, and you can deploy more in the future by just adding Regions.
upvoted 2 times
5 months, 3 weeks ago
Option A: WAF
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
Use AWS WAF and set up a managed rule to block request patterns associated with the exploitation of SQL databases, like SQL injection attacks.
Associate it with the Application Load Balancer. Integrate AWS WAF with AWS Firewall Manager to reuse the rules across all the AWS accounts.
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: B
B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.
To protect Amazon API Gateway managed REST APIs from SQL injection and cross-site scripting attacks across multiple accounts with the least
amount of administrative effort, you can set up AWS Firewall Manager in both Regions and centrally configure AWS WAF rules.
upvoted 1 times
6 months ago
Selected Answer: B
Clarified here https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 1 times
6 months ago
Selected Answer: B
Option B, setting up AWS Firewall Manager in both Regions and centrally configuring AWS WAF rules, would require the least amount of
administrative effort.
AWS Firewall Manager is a centralized service that enables you to set security policies across your accounts and applications, including API
Gateway-managed REST APIs. By setting up AWS Firewall Manager in both Regions and centrally configuring AWS WAF rules, you can protect your
APIs from SQL injection and cross-site scripting attacks with minimal effort, as the rules will be centrally managed and automatically enforced
across all of your accounts and applications.
upvoted 4 times
6 months, 1 week ago
Selected Answer: B
Option B involves setting up AWS Firewall Manager in both regions and centrally configuring AWS WAF rules. This allows you to manage the
protection of your APIs across multiple accounts and regions from a central location, reducing the administrative effort required.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Correct answer - A
WAF - HTTP headers, HTTP body, or URI strings Protects from common attack - SQL
injection and Cross-Site Scripting (XSS)
upvoted 2 times
Topic 1
Question #120
A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a Network Load Balancer (NLB) in the us-west-
2 Region. Most of the company's users are located in the United States and Europe. The company wants to improve the performance and
availability of the solution. The company launches and configures three EC2 instances in the eu-west-1 Region and adds the EC2 instances as
targets for a new NLB.
Which solution can the company use to route traffic to all the EC2 instances?
A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution.
Use the Route 53 record as the distribution’s origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as
endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the
six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to
one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.
Correct Answer:
A
Highly Voted
8 months ago
B is the correct one for self-managed DNS.
If Route 53 had to be used, ALBs (layer 7) would need to be used as endpoints for the 2 Regions x 3 EC2 instances; if that were the case, the answer would be option D.
upvoted 9 times
Highly Voted
8 months, 1 week ago
Selected Answer: B
for me it is B
upvoted 8 times
Most Recent
6 days, 10 hours ago
Selected Answer: B
Option B offers a global solution by utilizing Global Accelerator. By creating a standard accelerator and configuring endpoint groups in both
Regions, the company can route traffic to all the EC2 instances across multiple Regions. Adding the two NLBs as endpoints ensures that traffic is
distributed effectively.
Option A does not directly address the requirement of routing traffic to all EC2 instances. It focuses on routing based on geolocation and using
CloudFront as a distribution, which may not achieve the desired outcome.
Option C involves managing Elastic IP addresses and routing based on geolocation. However, it may not provide the same level of performance
and availability as AWS Global Accelerator.
Option D focuses on ALBs and latency-based routing. While it can be a valid solution, it does not utilize AWS Global Accelerator and may require
more configuration and management compared to option B.
upvoted 1 times
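The Global Accelerator setup in option B boils down to one endpoint group per Region, each pointing at that Region's NLB. A sketch of the parameters involved, assuming boto3's globalaccelerator client; all ARNs below are placeholders:

```python
# Sketch: one endpoint group per Region for a Global Accelerator listener.
# All ARNs below are placeholders.

def endpoint_group_params(listener_arn: str, region: str, nlb_arn: str) -> dict:
    """Parameters for globalaccelerator.create_endpoint_group."""
    return {
        "ListenerArn": listener_arn,
        "EndpointGroupRegion": region,
        "EndpointConfigurations": [{"EndpointId": nlb_arn, "Weight": 128}],
    }

groups = [
    endpoint_group_params("arn:listener", "us-west-2", "arn:nlb-usw2"),
    endpoint_group_params("arn:listener", "eu-west-1", "arn:nlb-euw1"),
]
# Each dict would be passed to globalaccelerator.create_endpoint_group(**g);
# the accelerator then routes users over the AWS backbone to the nearest group.
```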
2 weeks, 5 days ago
Selected Answer: B
Correct is B.
If it is self-managed DNS, you cannot use Route 53. There can be only 1 DNS service for the domain.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
For self-managed DNS solution:
https://aws.amazon.com/blogs/security/how-to-protect-a-self-managed-dns-service-against-ddos-attacks-using-aws-global-accelerator-and-aws-
shield-advanced/
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
Re-wording the correct explanations here:
if it is self-managed DNS, you cannot use Route 53. There can be only 1 DNS service for the domain. If the question hadn't mentioned self-managed DNS and had asked for the optimal solution, then D would be correct.
upvoted 2 times
1 month, 3 weeks ago
It is using self-managed DNS; the other three options talk about Route 53, so B can be the only answer.
upvoted 1 times
2 months ago
I think both answers A and B are solutions.
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
The first half of option A seems right: "Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs." However, the second part, "Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin," is totally useless. Route 53 geolocation routing can route requests directly to the NLBs.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/global-accelerator/?icmpid=docs_homepage_networking
explanation:
AWS Global Accelerator Documentation
AWS Global Accelerator is a network layer service in which you create accelerators to improve the security, availability, and performance of your
applications for local and global users. Depending on the type of accelerator that you choose, you can gain additional benefits, such as improving
availability or mapping users to specific destination endpoints.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
option A although mentions geolocation routing and would allow the company to route traffic based on the location of the user. However, the
company has already implemented a self-managed DNS solution and wants to use NLBs for load balancing, so it may not be feasible for them to
switch to Route 53 and CloudFront.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
option A although mentions geolocation routing and would allow the company to route traffic based on the location of the user. However, the
company has already implemented a self-managed DNS solution and wants to use NLBs for load balancing, so it may not be feasible for them to
switch to Route 53 and CloudFront.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Option A is not the optimal solution because, although it can route traffic to one of the two NLBs based on geolocation, it still does not provide a global solution for routing traffic to all the EC2 instances.
Option B is the right solution because it lets the company use AWS Global Accelerator to route traffic to the NLBs in both Regions, so traffic is automatically routed to the EC2 instances in both Regions. AWS Global Accelerator takes care of routing traffic optimally over the AWS global network to minimize latency and improve the performance and availability of the solution.
upvoted 3 times
2 months, 3 weeks ago
Thanks
upvoted 1 times
3 months ago
Selected Answer: B
"The company wants to improve the performance and availability of the solution": geolocation might be a good option if the question stressed limiting access based on location. Since performance and availability are needed, B is the right choice.
upvoted 2 times
3 months, 1 week ago
Selected Answer: B
Both A and B will do the job... B provides access to the AWS backbone and therefore better performance
upvoted 3 times
3 months, 1 week ago
Selected Answer: B
"Self-managed DNS solution": you cannot do anything in Route 53 if you don't use it :-) Answer is B.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: B
I vote B. "A" doesn't sound right. When an NLB is used, it is redirecting TCP/IP packets. CloudFront is used for HTTP requests, not for raw TCP/IP.
upvoted 1 times
Topic 1
Question #121
A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in
a Multi-AZ deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?
A. Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot.
B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB
instance.
C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB
instance.
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS)
managed keys (SSE-KMS).
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: A
"You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption to an
unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a
DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance."
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 35 times
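The copy-then-restore procedure quoted above can be sketched with boto3's RDS client; supplying KmsKeyId to copy_db_snapshot is what makes the copy encrypted. Snapshot identifiers, the instance name, and the key alias are all placeholders:

```python
# Sketch: encrypt an existing unencrypted RDS instance via its snapshot.
# Snapshot identifiers and the KMS key alias are placeholders.

def encrypted_copy_params(source_snap: str, target_snap: str, kms_key_id: str) -> dict:
    """Parameters for rds.copy_db_snapshot; KmsKeyId makes the copy encrypted."""
    return {
        "SourceDBSnapshotIdentifier": source_snap,
        "TargetDBSnapshotIdentifier": target_snap,
        "KmsKeyId": kms_key_id,
    }

params = encrypted_copy_params("daily-snap", "daily-snap-encrypted", "alias/rds-key")
# rds.copy_db_snapshot(**params)
# Restoring always creates a NEW instance, which inherits the encryption:
# rds.restore_db_instance_from_db_snapshot(
#     DBInstanceIdentifier="mydb-encrypted",
#     DBSnapshotIdentifier="daily-snap-encrypted",
# )
```

After the restore, applications are pointed at the new encrypted instance and the old unencrypted one is retired.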
1 month, 1 week ago
How can A guarantee future encryption?
upvoted 1 times
Most Recent
6 days, 9 hours ago
Selected Answer: C
A. Replacing the existing DB instance with an encrypted snapshot can result in downtime and potential data loss during migration.
B. Creating a new encrypted EBS volume for snapshots does not address the encryption of the DB instance itself.
D. Copying snapshots to an encrypted S3 bucket only encrypts the snapshots, but does not address the encryption of the DB instance.
Option C is the most suitable as it involves copying and encrypting the snapshots using AWS KMS, ensuring encryption for both the database and
snapshots.
upvoted 1 times
1 month ago
Daily snapshots are already taken from the DB instance, so why create another copy? You just need to encrypt the latest daily DB snapshot and then restore from the encrypted snapshot.
upvoted 1 times
2 months ago
Selected Answer: A
You can't restore from a DB snapshot to an existing DB instance; a new DB instance is created when you restore.
upvoted 3 times
2 months, 1 week ago
A and C are almost similar, except that A says the latest snapshot, while C says snapshots (all the snapshots).
I don't see any other difference between those two options.
Option A is clearly the correct one, as all you need is the latest snapshot.
upvoted 1 times
2 months, 2 weeks ago
A
You can only encrypt an Amazon RDS DB instance when you create it, not after the DB instance is created.
However, because you can encrypt a copy of an unencrypted snapshot, you can effectively add encryption to an unencrypted DB instance. That is,
you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the
encrypted snapshot, and thus you have an encrypted copy of your original DB instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
upvoted 1 times
3 months ago
Selected Answer: C
Encryption is enabled during the Copy process itself.
https://repost.aws/knowledge-center/encrypt-rds-snapshots
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
C is the more complete answer as you need KMS to encrypt the snapshot copy prior to restoring it to the Database instance.
upvoted 1 times
2 months, 3 weeks ago
BUT you can't restore an encrypted snapshot to an existing DB instance, only to a NEW one (not an existing one). The procedure is described this way:
"(...) you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance."
This refers to creating a NEW DB instance (which is encrypted), never restoring into an existing one.
The RDS engine treats restoring from an encrypted snapshot as creating an encrypted NEW database.
upvoted 2 times
3 months, 2 weeks ago
Selected Answer: C
A does not cover data created in the future.
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created.
C will do this; see the architecture from the linked pattern:
Source architecture: unencrypted RDS DB instance.
Target architecture: encrypted RDS DB instance.
The destination RDS DB instance is created by restoring the DB snapshot copy of the source RDS DB instance.
An AWS KMS key is used for encryption while restoring the snapshot.
An AWS DMS replication task is used to migrate the data.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 1 times
3 months, 1 week ago
Option A seems correct.
With option A we already have DB snapshots. Just encrypt the latest available snapshot copy; why copy the snapshot once again (as option C says)?
upvoted 1 times
4 months, 2 weeks ago
A
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption to an
unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a
DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance. If your project allows for downtime (at least for
write transactions) during this activity, this is all you need to do. When the new, encrypted copy of the DB instance becomes available, you can
point your applications to the new database.
upvoted 1 times
4 months, 3 weeks ago
It's A, for the following reasons:
To restore an encrypted DB instance from an encrypted snapshot, we'll need to replace the old one, as we cannot enable encryption on an existing DB instance.
We have both snapshots and the DB instance encrypted moving forward, since all the daily backups of an already encrypted DB instance will be encrypted.
upvoted 1 times
5 months ago
Selected Answer: C
C is right
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption to an
unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a
DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance.
Tools used to enable encryption:
AWS KMS key for encryption – When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed key for
Amazon RDS to encrypt your DB instance. If you don't specify the key identifier for a customer managed key, Amazon RDS uses the AWS managed
key for your new DB instance. Amazon RDS creates an AWS managed key for Amazon RDS for your AWS account.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
upvoted 2 times
5 months, 1 week ago
The correct answer is C,
Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS)
Restore encrypted snapshot to an existing DB instance.
This is the correct approach as it allows you to encrypt the existing snapshots and the existing DB instance using AWS KMS. This way, you can
ensure that all data stored in the DB instance and the snapshots are encrypted at rest, providing an additional layer of security.
upvoted 1 times
2 months, 3 weeks ago
BUT you can't restore an encrypted snapshot to an existing DB instance, only to a NEW one (not an existing one). The procedure is described this way:
"(...) you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance."
This refers to creating a NEW DB instance (which is encrypted), never restoring into an existing one.
The RDS engine treats restoring from an encrypted snapshot as creating an encrypted NEW database.
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS)
managed keys (SSE-KMS).
This option ensures that the database snapshots are encrypted at rest by copying them to an S3 bucket that is encrypted using SSE-KMS. This
option also provides the flexibility to restore the snapshots to a new RDS DB instance in the future, which will also be encrypted by default.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
If C means encrypting while making the snapshot, then it is incorrect: you cannot make an encrypted snapshot directly from an unencrypted RDS instance. But it would be correct if it means enabling KMS encryption when restoring the DB instance. Badly worded.
upvoted 1 times
5 months, 2 weeks ago
The correct answer is A. Check this link " https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-
postgresql-db-instance.html "
" However, you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted
copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance".
upvoted 1 times
Topic 1
Question #122
A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?
A. Use multi-factor authentication (MFA) to protect the encryption keys.
B. Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
C. Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.
D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys.
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
If you are a developer who needs to digitally sign or verify data using asymmetric keys, you should use the service to create and manage the
private keys you’ll need. If you’re looking for a scalable key management infrastructure to support your developers and their growing number of
applications, you should use it to reduce your licensing costs and operational burden...
https://aws.amazon.com/kms/faqs/#:~:text=If%20you%20are%20a%20developer%20who%20needs%20to%20digitally,a%20broad%20set%20of%20industry%20and%20regional%20compliance%20regimes.
upvoted 16 times
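For developers encrypting data in their applications, the usual KMS pattern behind this answer is envelope encryption via generate_data_key. A sketch of the request involved, assuming boto3's kms client; the key alias is a placeholder:

```python
# Sketch: the envelope-encryption pattern KMS enables for application data.
# The key alias below is an illustrative placeholder.

def data_key_request(key_id: str) -> dict:
    """Parameters for kms.generate_data_key: KMS returns a plaintext key
    to encrypt data locally plus an encrypted copy to store alongside it."""
    return {"KeyId": key_id, "KeySpec": "AES_256"}

req = data_key_request("alias/app-data-key")
# resp = kms.generate_data_key(**req)
# resp["Plaintext"]      -> encrypt data locally, then discard from memory
# resp["CiphertextBlob"] -> store with the data; later pass to kms.decrypt
```

This is what lets KMS scale: the master key never leaves the service, and key rotation, auditing, and access control are handled centrally.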
7 months ago
Most documented answers. Thank you, 123jhl0.
upvoted 2 times
Most Recent
6 days, 9 hours ago
Selected Answer: B
By utilizing AWS KMS, the company can offload the operational responsibilities of key management, including key generation, rotation, and
protection. AWS KMS provides a scalable and secure infrastructure for managing encryption keys, allowing developers to easily integrate
encryption into their applications without the need to manage the underlying key infrastructure.
Option A (MFA), option C (ACM), and option D (IAM policy) are not directly related to reducing the operational burden of key management. While
these options may provide additional security measures or access controls, they do not specifically address the scalability and management aspects
of a key management infrastructure. AWS KMS is designed to simplify the key management process and is the most suitable option for reducing
the operational burden in this scenario.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: B
B is correct.
upvoted 1 times
6 months ago
Selected Answer: B
The correct answer is Option B. To reduce the operational burden, the solutions architect should use AWS Key Management Service (AWS KMS) to
protect the encryption keys.
AWS KMS is a fully managed service that makes it easy to create and manage encryption keys. It allows developers to easily encrypt and decrypt
data in their applications, and it automatically handles the underlying key management tasks, such as key generation, key rotation, and key
deletion. This can help to reduce the operational burden associated with key management.
upvoted 4 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
Community vote distribution
B (100%)
7 months, 2 weeks ago
Selected Answer: B
If you are responsible for securing your data across AWS services, you should use it to centrally manage the encryption keys that control access to
your data. If you are a developer who needs to encrypt data in your applications, you should use the AWS Encryption SDK with AWS KMS to easily
generate, use and protect symmetric encryption keys in your code.
upvoted 2 times
Topic 1
Question #123
A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is on each
instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption is causing the compute
capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application's performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.
B. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL
termination.
C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the
existing EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the
SSL certificate from ACM.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
This issue is solved by SSL offloading, i.e. by moving the SSL termination task to the ALB.
https://aws.amazon.com/blogs/aws/elastic-load-balancer-support-for-ssl-termination/
upvoted 11 times
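A minimal sketch of option D's wiring, assuming the certificate has already been imported into ACM. The function only assembles the kwargs you would hand to boto3's `elbv2` `create_listener`; every ARN below is a placeholder.

```python
def https_listener_params(alb_arn: str, cert_arn: str, target_group_arn: str) -> dict:
    """Build create_listener kwargs that terminate TLS at the ALB.

    The ALB decrypts incoming HTTPS and forwards plain HTTP to the
    target group, so the EC2 instances spend no CPU on SSL work.
    """
    return {
        "LoadBalancerArn": alb_arn,
        "Protocol": "HTTPS",
        "Port": 443,
        "Certificates": [{"CertificateArn": cert_arn}],
        "DefaultActions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }


# Placeholder ARNs; in real use these come from ACM and ELBv2.
listener = https_listener_params(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc",
    "arn:aws:acm:us-east-1:111122223333:certificate/example-id",
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/def",
)
```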
Most Recent
6 days, 9 hours ago
Selected Answer: D
By using ACM to manage the SSL certificate and configuring an ALB with HTTPS listener, the SSL termination will be handled by the load balancer
instead of the web servers. This offloading of SSL processing to the ALB reduces the compute capacity burden on the web servers and improves
their performance by allowing them to focus on serving the dynamic web application.
Option A suggests creating a new SSL certificate using ACM, but it does not address the SSL termination offloading and load balancing capabilities
provided by an ALB.
Option B suggests migrating the SSL certificate to an S3 bucket, but this approach does not provide the necessary SSL termination and load
balancing functionalities.
Option C suggests creating another EC2 instance as a proxy server, but this adds unnecessary complexity and management overhead without
leveraging the benefits of ALB's built-in load balancing and SSL termination capabilities.
Therefore, option D is the most suitable choice to increase the application's performance in this scenario.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: A
Why is A wrong?
upvoted 2 times
1 month, 3 weeks ago
Company uses its own SSL certificate. Option A says.. Create a SSL certificate in ACM
upvoted 2 times
5 months, 1 week ago
Selected Answer: D
SSL termination is the process of ending an SSL/TLS connection. This is typically done by a device, such as a load balancer or a reverse proxy, that is
positioned in front of one or more web servers. The device decrypts incoming SSL/TLS traffic and then forwards the unencrypted request to the
web server. This allows the web server to process the request without the overhead of decrypting and encrypting the traffic. The device then re-
encrypts the response from the web server and sends it back to the client. This allows the device to offload the SSL/TLS processing from the web
servers and also allows for features such as SSL offloading, SSL bridging, and SSL acceleration.
upvoted 4 times
Community vote distribution
D (92%)
8%
6 months ago
Selected Answer: D
The correct answer is D. To increase the application's performance, the solutions architect should import the SSL certificate into AWS Certificate
Manager (ACM) and create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
An Application Load Balancer (ALB) can offload the SSL termination process from the EC2 instances, which can help to increase the compute
capacity available for the web application. By creating an ALB with an HTTPS listener and using the SSL certificate from ACM, the ALB can handle
the SSL termination process, leaving the EC2 instances free to focus on running the web application.
upvoted 4 times
6 months, 1 week ago
Selected Answer: D
Option D to offload the SSL encryption workload
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: D
Due to this statement particularly: "The company has its own SSL certificate" as it's not created from AWS ACM itself.
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
8 months ago
Selected Answer: D
agree with D
upvoted 1 times
Topic 1
Question #124
A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be
started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has
asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.
What should the solutions architect recommend?
A. Implement EC2 Spot Instances.
B. Purchase EC2 Reserved Instances.
C. Implement EC2 On-Demand Instances.
D. Implement the processing on AWS Lambda.
Correct Answer:
A
Highly Voted
7 months ago
Selected Answer: A
Can't be implemented on Lambda because the Lambda timeout is 15 minutes and the job takes 60 minutes to complete.
Answer >> A
upvoted 11 times
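To put a number on the cost argument, a quick back-of-the-envelope comparison. The prices are invented for illustration (real Spot discounts vary by instance type, AZ, and time, commonly in the 60-90% range):

```python
def job_cost(hourly_price: float, job_minutes: int, instance_count: int) -> float:
    """Cost of one run of the batch job across many instances,
    assuming per-second billing (so partial hours are prorated)."""
    return hourly_price * (job_minutes / 60) * instance_count


# Hypothetical prices: $0.40/hr On-Demand vs $0.12/hr Spot (70% off).
on_demand = job_cost(0.40, job_minutes=60, instance_count=50)
spot = job_cost(0.12, job_minutes=60, instance_count=50)
# Interruptions are acceptable: the job is stateless and restartable,
# so the discount comes with no real downside here.
savings = on_demand - spot
```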
Highly Voted
8 months, 1 week ago
spot instances
upvoted 5 times
Most Recent
5 days, 17 hours ago
Selected Answer: A
Spot Instances provide significant cost savings for flexible start and stop batch jobs.
Purchasing Reserved Instances (B) is better for stable workloads, not dynamic ones.
On-Demand Instances (C) are costly and lack potential cost savings like Spot Instances.
AWS Lambda (D) is not suitable for long-running batch jobs.
upvoted 1 times
2 weeks, 5 days ago
Selected Answer: A
A is correct
upvoted 1 times
3 months ago
Answer A:
typically takes upwards of 60 minutes total to complete.
upvoted 1 times
6 months ago
Selected Answer: A
The correct answer is Option A. To design a scalable and cost-effective solution for the batch processing job, the solutions architect should
recommend implementing EC2 Spot Instances.
EC2 Spot Instances allow users to bid on spare Amazon EC2 computing capacity and can be a cost-effective solution for stateless, interruptible
workloads that can be started and stopped at any time. Since the batch processing job is stateless, can be started and stopped at any time, and
typically takes upwards of 60 minutes to complete, EC2 Spot Instances would be a good fit for this workload.
upvoted 2 times
6 months ago
Selected Answer: A
Spot Instances should be good enough and cost effective because the job can be started and stopped at any given time with no negative impact.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
Community vote distribution
A (100%)
7 months, 1 week ago
A is correct
upvoted 1 times
8 months ago
Selected Answer: A
A is the answer
upvoted 1 times
Topic 1
Question #125
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The
database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The
EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly
available.
Which combination of configuration options will meet these requirements? (Choose two.)
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the
private subnets.
C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance
in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application
Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application
Load Balancer in the public subnets.
Correct Answer:
CE
Highly Voted
7 months, 2 weeks ago
Selected Answer: AD
Answer A for: The EC2 instances and the RDS DB instance should not be exposed to the public internet. Answer D for: The EC2 instances require
internet access to complete payment processing of orders through a third-party web service. Answer A for: The application must be highly
available.
upvoted 19 times
1 month, 3 weeks ago
Why not option B? The EC2 instances can be launched in private subnets across two Availability Zones, and the Application Load Balancer can be
deployed in the private subnets. NAT gateways can be configured in each private subnet to provide internet access for the EC2 instances to
communicate with the third-party web service.
upvoted 1 times
1 month, 1 week ago
B option wrong! NAT gateways must be created in public subnets!!
upvoted 1 times
7 months, 1 week ago
We will require 2 private subnets; D mentions only 1 private subnet.
upvoted 3 times
Highly Voted
5 months, 3 weeks ago
A and E!
Application has to be highly available while the instance and database should not be exposed to the public internet, but the instances still requires
access to the internet. NAT gateway has to be deployed in public subnets in this case while instances and database remain in private subnets in the
VPC, therefore answer is (A) and (E).
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
If the instances did not require access to the internet, then the answer could have been
(B) to use a private NAT gateway and keep it in the private subnets to communicate only to the VPCs.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
upvoted 9 times
2 months, 1 week ago
your link is right but your voting is wrong, it should be AD, although that still doesn't explain why 2 NAT gateways
upvoted 2 times
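The resulting layout (option A combined with the last option) can be sketched as data. Names and CIDRs are invented for illustration; per AZ there is one public subnet holding a NAT gateway and one private subnet whose default route points at it:

```python
def vpc_plan(azs):
    """Describe the HA layout: per AZ, a public subnet with a NAT
    gateway and a private subnet (EC2 + RDS) routing through it."""
    plan = {"public": [], "private": [], "nat_gateways": []}
    for i, az in enumerate(azs):
        public = {"az": az, "cidr": f"10.0.{i}.0/24"}
        # NAT gateways must sit in a PUBLIC subnet (they need a route
        # through the internet gateway); the instances stay private.
        nat = {"az": az, "subnet_cidr": public["cidr"]}
        private = {"az": az, "cidr": f"10.0.{i + 100}.0/24", "default_route": nat}
        plan["public"].append(public)
        plan["private"].append(private)
        plan["nat_gateways"].append(nat)
    return plan


plan = vpc_plan(["us-east-1a", "us-east-1b"])
```

One NAT gateway per AZ keeps outbound internet access working even if one Availability Zone fails, which is why two gateways are needed for high availability.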
Most Recent
2 days, 12 hours ago
Selected Answer: AD
Answer A for: The EC2 instances and the RDS DB instance should not be exposed to the public internet. Answer D for: The EC2 instances require
internet access to complete payment processing of orders through a third-party web service. Answer A for: The application must be highly
available.
upvoted 1 times
Community vote distribution
AD (49%)
A (25%)
AB (23%)
5 days, 17 hours ago
Selected Answer: AD
Option D configures a VPC with a public subnet for the web tier, allowing customers to access the website. The private subnet provides a secure
environment for the EC2 instances and the RDS DB instance. NAT gateways are used to provide internet access to the EC2 instances in the private
subnet for payment processing.
Option A uses an Auto Scaling group to launch the EC2 instances in private subnets, ensuring they are not directly accessible from the public
internet. The RDS Multi-AZ DB instance is also placed in private subnets, maintaining security.
upvoted 1 times
1 week, 6 days ago
Selected Answer: AD
The second option labeled D, i.e. E.
upvoted 1 times
2 weeks, 6 days ago
Selected Answer: CD
I had it as AD, but for me the question asked for high availability, and A doesn't specify across availability zones. So, A is more secure but not highly
available. C is less secure but highly available
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: AD
AD because 2 NAT gateways in 2 public subnets in 2 AZs.
upvoted 1 times
1 month, 1 week ago
Selected Answer: CD
C - provides the required HA
E - best answer for the access requirements. The NAT gateway is required for the EC2 instances to access the third-party web service. This does not
expose them to inbound connections from the Internet.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: AD
A & the 2nd D. You have to put each NAT gateway in each public subnet
upvoted 2 times
1 month, 4 weeks ago
Selected Answer: AD
A and the second D are the correct choices. ALB in the public subnet for access from the internet. NAT gateways and the EC2s in the private subnet
over 2 AZs to meet the requirements.
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load
Balancer in the public subnets.
upvoted 1 times
2 months ago
Selected Answer: AD
AE
Option B is not a valid solution as it only includes private subnets, and both the NAT gateway and Application Load Balancer require public subnets.
upvoted 1 times
2 months ago
Selected Answer: AB
In option B, an Application Load Balancer (ALB) is deployed in the private subnets, and two NAT gateways are configured across two Availability
Zones to provide internet access to the instances in the private subnets. This allows the web tier to be accessed publicly through the ALB while still
keeping the instances in private subnets. The NAT gateways act as a proxy between the instances and the internet, allowing only necessary traffic to
pass through while blocking all other inbound traffic. This configuration provides additional security to the application by keeping the instances in
private subnets and minimizing the exposure of the infrastructure to the public internet
upvoted 2 times
2 months, 1 week ago
Selected Answer: AB
private subnets, meaning C D E are not
upvoted 2 times
2 months, 1 week ago
my bad, only RDS are private
upvoted 1 times
2 months, 2 weeks ago
None of the answers provided ensures internet connectivity. A NAT gateway alone doesn't provide internet access; it needs an internet gateway. Also,
once you have the NAT and IGW, you need to edit the route tables, and only then do you get internet access.
upvoted 1 times
3 months ago
Answer AE:
upvoted 1 times
3 months, 1 week ago
https://docs.aws.amazon.com/prescriptive-guidance/latest/load-balancer-stickiness/subnets-routing.html ALB should be in Public Subnet
upvoted 1 times
3 months, 3 weeks ago
A&D
ALb associated with public subnets and the route table configured for local traffic flow.
NAT gateways allow for internet connectivity for EC2 instances
upvoted 1 times
Topic 1
Question #126
A solutions architect needs to implement a solution to reduce a company's storage costs. All the company's data is in the Amazon S3 Standard
storage class. The company must keep all data for at least 25 years. Data from the most recent 2 years must be highly available and immediately
retrievable.
Which solution will meet these requirements?
A. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive immediately.
B. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.
C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.
D. Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately and to S3 Glacier Deep
Archive after 2 years.
Correct Answer:
B
Highly Voted
7 months, 1 week ago
Selected Answer: B
Why not C? Because with Intelligent-Tiering the objects are automatically moved to different tiers.
The question says "the data from most recent 2 yrs should be highly available and immediately retrievable", which means in the Intelligent tier, if you
activate the archiving option (as Option C specifies), the objects will be moved to archive tiers (instant access to deep archive access tiers) in 90 to
730 days. Remember these archive tiers' performance will be similar to S3 Glacier Flexible and S3 Deep Archive, which means files cannot be retrieved
immediately within 2 yrs.
We have a hard requirement in the question which says data should be retrievable immediately for the 2 yrs, which cannot be achieved in the Intelligent tier.
So B is the correct option imho.
For the above reason it's possible only in S3 Standard; then configure a lifecycle rule to move to S3 Glacier Deep Archive after 2
yrs.
upvoted 9 times
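Option B expressed as an S3 Lifecycle configuration — the dict shape you would pass to boto3's `put_bucket_lifecycle_configuration` (the rule ID is a placeholder). Objects spend 730 days in S3 Standard, highly available and instantly retrievable, then move to Deep Archive for the rest of the 25-year retention:

```python
def lifecycle_config(days_before_archive: int) -> dict:
    """Build a lifecycle configuration that transitions every object
    to Glacier Deep Archive after the given number of days."""
    return {
        "Rules": [
            {
                "ID": "archive-after-2-years",  # placeholder rule name
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = all objects
                "Transitions": [
                    {"Days": days_before_archive, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    }


config = lifecycle_config(days_before_archive=730)  # 2 years = 730 days
```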
Highly Voted
7 months, 1 week ago
Selected Answer: B
B is the only right answer. C does not indicate archiving after 2 years. If it did specify 2 years, then C would also be an option.
upvoted 7 times
Most Recent
5 days, 17 hours ago
Selected Answer: B
Option A is incorrect because immediately transitioning objects to S3 Glacier Deep Archive would not fulfill the requirement of keeping the most
recent 2 years of data highly available and immediately retrievable.
Option C is also incorrect because using S3 Intelligent-Tiering with archiving option would not meet the requirement of immediately retrievable
data for the most recent 2 years.
Option D is not the best choice because transitioning objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) and then to S3 Glacier Deep
Archive would not satisfy the requirement of immediately retrievable data for the most recent 2 years.
Option B is the correct solution. By setting up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years, the company can
keep all data for at least 25 years while ensuring that data from the most recent 2 years remains highly available and immediately retrievable in the
Amazon S3 Standard storage class. This solution optimizes storage costs by leveraging the Glacier Deep Archive for long-term storage.
upvoted 1 times
1 week, 4 days ago
Why not D
upvoted 2 times
2 months, 1 week ago
Selected Answer: B
B is the only one possible.
upvoted 1 times
2 months, 2 weeks ago
C would not work, as the names of these S3 archive tiers are "Archive Access" and "Deep Archive Access"; since they mention Glacier in
option C, I think B is the correct one.
upvoted 1 times
Community vote distribution
B (75%)
C (19%)
6%
4 months, 3 weeks ago
It's pretty straightforward.
S3 Standard answers the high availability/immediate retrieval requirement for 2 years. S3 Intelligent-Tiering would just incur an additional analysis cost, while the
company insists that it requires immediate retrieval at any moment and without risk to availability. So a capital B.
upvoted 2 times
5 months ago
C appears to be appropriate - good case for intelligent tiering
upvoted 1 times
2 months, 1 week ago
The option just says Intelligent Tiering, it doesn't specify when it would transition the date to Deep Archive, so how do we know it would do it at
the correct time? It has to be A.
upvoted 1 times
3 months, 3 weeks ago
Intelligent tiering appears to be best suited for unknown usage pattern.. but with a known usage pattern Life cycle policy may be optimal.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.
S3 Intelligent Tiering supports changing the default archival time to 730 days (2 years) from the default 90 or 180 days. Other levels of tiering are
instant access tiers.
upvoted 2 times
5 months, 4 weeks ago
Selected Answer: D
Option D is the correct solution for this scenario.
S3 Lifecycle policies allow you to automatically transition objects to different storage classes based on the age of the object or other specific
criteria. In this case, the company needs to keep all data for at least 25 years, and the data from the most recent 2 years must be highly available
and immediately retrievable.
upvoted 2 times
5 months, 2 weeks ago
If the option for D was Infrequent Access it would be good, but here it is One Zone-IA which is not highly available. Then it must be B
upvoted 5 times
5 months, 4 weeks ago
Option A is not a good solution because it would transition all objects to S3 Glacier Deep Archive immediately, making the data from the most
recent 2 years not immediately retrievable. Option B is not a good solution because it would not make the data from the most recent 2 years
immediately retrievable.
Option C is not a good solution because S3 Intelligent-Tiering is designed to automatically move objects between two storage classes (Standard
and Infrequent Access) based on object access patterns. It does not provide a way to transition objects to S3 Glacier Deep Archive, which is
required for long-term storage.
Option D is the correct solution because it would transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately, making
the data from the most recent 2 years immediately retrievable. After 2 years, the objects would be transitioned to S3 Glacier Deep Archive for
long-term storage. This solution meets the requirements of the company to keep all data for at least 25 years and make the data from the most
recent 2 years immediately retrievable.
upvoted 1 times
5 months, 1 week ago
B is immediately retrievable, has high availability and using the lifecycle you can transition to deep archive after the 2 years time period.
upvoted 1 times
5 months, 2 weeks ago
S3 One Zone-IA is not highly available compared with S3 standard
https://aws.amazon.com/about-aws/whats-new/2018/04/announcing-s3-one-zone-infrequent-access-a-new-amazon-s3-storage-class/?
nc1=h_ls
upvoted 1 times
6 months ago
Selected Answer: B
B looks correct
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
B. Most correct
upvoted 2 times
7 months ago
Selected Answer: C
https://aws.amazon.com/blogs/aws/s3-intelligent-tiering-adds-archive-access-tiers/
upvoted 1 times
6 months, 1 week ago
From your link "We added S3 Intelligent-Tiering to Amazon Amazon S3 to solve the problem of using the right storage class and optimizing
costs when access patterns are irregular.". But access patterns are not irregular, they are clearly stated on the question, so this is not required.
upvoted 3 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
C - S3 Intelligent-Tiering
Customers saving on storage with S3 Intelligent-Tiering
S3 Intelligent-Tiering automatically stores objects in three access tiers: one tier optimized for frequent access, a lower-cost tier optimized for
infrequent access, and a very-low-cost tier optimized for rarely accessed data. For a small monthly object monitoring and automation charge, S3
Intelligent-Tiering moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier for savings of 40%; and after 90
days of no access, they’re
There are no retrieval charges in S3 Intelligent-Tiering. S3 Intelligent-Tiering has no minimum eligible object size, but objects smaller than 128 KB
are not eligible for auto tiering. These smaller objects may be stored, but they’ll always be charged at the Frequent Access tier rates and don’t incur
the monitoring and automation charge
upvoted 1 times
6 months, 1 week ago
"moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier..." This is not required, they should remain
where they are for 2 years.
upvoted 1 times
6 months, 1 week ago
Once you have activated one or both of the archive access tiers, S3 Intelligent-Tiering will automatically move objects that haven’t been
accessed for 90 days to the Archive Access tier, ...Objects in the archive access tiers are retrieved in 3-5 hours!
Yet the requirements are "Data from the most recent 2 years must be highly available and immediately retrievable". Not C!
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
Option C doesn't look correct to me because it is not clear when the data will be moved to the Deep Archive. It could be earlier than 2 years, which is not
correct.
upvoted 4 times
7 months, 2 weeks ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-
overview.html#:~:text=S3%20Intelligent%2DTiering%20provides%20you,minimum%20of%2090%20consecutive%20days. Option B / S3 Glacier
Deep Archive seems correct to reduce a company's storage costs.
upvoted 1 times
Topic 1
Question #127
A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the
maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet
requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?
A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
B. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage
C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage
D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: D
Max instance store possible at this time is 30TB for NVMe which has the higher I/O compared to EBS.
is4gen.8xlarge 4 x 7,500 GB (30 TB) NVMe SSD
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
upvoted 19 times
1 month, 3 weeks ago
Update: i3en.metal and i3en.24xlarge = 8 x 7500 GB (60TB)
upvoted 2 times
6 months, 1 week ago
For an instance store volume used as the root volume, the size varies by AMI, but the maximum size is 10 GB.
upvoted 1 times
6 months, 1 week ago
This link shows a max capacity of 30TB, so what is the problem?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
upvoted 1 times
6 months, 1 week ago
Only the following instance types support an instance store volume as the root device: C3, D2, G2, I2, M3, and R3, and we're using an I3,
so an instance store volume is irrelevant.
upvoted 2 times
3 weeks, 6 days ago
THE CORRECT ANSWER IS A.
The biggest Instance Store Storage Optimized option (is4gen.8xlarge) has a capacity of only 3TB.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-volumes.html#instance-store-vol-so
upvoted 1 times
Highly Voted
8 months ago
Selected Answer: D
agree with D, since it is only used for video processing the instance store should be the fastest here (being ephemeral shouldn't be an issue because
they are moving the data to S3 after processing)
upvoted 7 times
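One way to encode the community-favored option D as a lookup (a sketch only — the requirement labels below are invented here, not AWS terms):

```python
def recommend_storage(requirement: str) -> str:
    """Map each workload profile in the question to a storage service."""
    table = {
        # Local NVMe instance store gives the highest raw I/O, but is
        # ephemeral - acceptable as scratch space for video processing
        # as long as results are copied out afterwards.
        "max-io-scratch-10tb": "EC2 instance store",
        # S3 is designed for 99.999999999% (11 nines) durability,
        # which covers the 300 TB media library.
        "durable-media-300tb": "Amazon S3",
        # Glacier is the low-cost tier for the 900 TB of unused archives.
        "archive-900tb": "Amazon S3 Glacier",
    }
    return table[requirement]
```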
Most Recent
5 days, 16 hours ago
Selected Answer: D
Option D is the recommended solution. Amazon EC2 instance store provides maximum performance for video processing, offering local, high-
speed storage that is directly attached to the EC2 instances. Amazon S3 is suitable for durable data storage, providing the required capacity of 300
TB for storing media content. Amazon S3 Glacier serves as a cost-effective solution for archival storage, meeting the requirement of 900 TB of
archival media storage.
Option A suggests using Amazon EBS for maximum performance, but it may not deliver the same level of performance as instance store for I/O-
intensive workloads.
Option B recommends Amazon EFS for durable data storage, but it may not provide the required performance for video processing.
Option C suggests using Amazon EC2 instance store for maximum performance and Amazon EFS for durable data storage, but instance store may
not offer the durability and scalability required for the storage needs of the media company.
upvoted 1 times
Community vote distribution
D (69%)
A (31%)
3 weeks, 6 days ago
Selected Answer: A
THE CORRECT ANSWER IS A.
The biggest Instance Store Storage Optimized option (is4gen.8xlarge) has a capacity of only 3TB.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-volumes.html#instance-store-vol-so
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: D
In terms of speed, instance store can generally offer higher I/O performance and lower latency than EBS, due to the fact that it is physically
attached to the host. However, the performance of EBS can be optimized based on the specific use case, by selecting the appropriate volume type,
size, and configuration.
upvoted 2 times
2 months ago
Selected Answer: D
INstance store gives the best I/O performance
upvoted 1 times
2 months, 1 week ago
The keyword here is "maximum possible I/O performance".
EBS and Ec2 instance store are good options, but EC2 is higher than EBS in terms of I/O performance. Maximum possible is clearly Ec2 instance
storage.
There are some concerns about the 10TB needed, however, storage optimized Ec2 instance stores can take up to 24 x 13980 GB (ie 312 TB)
So option D is the winner here.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: D
D of course
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
The instance-storage is a block storage directly attached to the EC2 instance (also has options to be accelerated with fast NVMe (Non-Volatile
Memory Express) interface) is ins FASTER than EBS.
Also there're types that reach top value of 30 TB.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Option A is the best fit for the media company's requirements. Amazon EBS offers the maximum possible I/O performance
and is a suitable option for video processing, while Amazon S3 is the durable data storage solution that can handle 300 TB of media
content. Amazon S3 Glacier is a suitable option for storing archival media that is no longer in use, and its cost is lower than
Amazon S3's. Option A will therefore provide the most suitable storage solution for the media company, combining high performance, durability, and cost effectiveness.
upvoted 2 times
3 months ago
Instance store backed instances can't be upgraded; volumes can be added only at launch time. If the instance is accidentally
terminated or stopped, all the data is lost. To prevent that to some extent, we need to back up data from instance store volumes to
persistent storage on a regular basis. So if we are spending more money on instance store volumes and still have the additional responsibility of
backing them up on a regular basis, it's not worth it. We can use an EBS volume type that can provide higher I/O performance.
upvoted 1 times
3 months, 1 week ago
When you compare the instance store and EBS as storage types by maximum IOPS, you will see that the instance store is better than EBS,
based on storage-optimized values.
For example: whereas EBS has a 40,000 max IOPS for storage-optimized instances, the EC2 instance store gives you a better option with a max of 2,146,664 random reads and
1,073,336 writes.
To get further information, you can visit the below links:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compute-optimized-instances.html#compute-ssd-perf
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html
So my answer is D
upvoted 2 times
3 months, 3 weeks ago
Selected Answer: D
Instance store for max I/O, S3 for durable storage and Glacier for archival
upvoted 1 times
4 months ago
Selected Answer: A
The issue with using an instance store of that size seems to be that you have to have a specific AMI, and paying for an 8xlarge for the extra I/O will
normally not be a good solution. The question is open as to compute requirements, and cost isn't mentioned.
upvoted 1 times
4 months ago
Selected Answer: D
for valuable, long-term data. Instead, use more durable data storage, such as Amazon S3, Amazon EBS, or Amazon EFS.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
upvoted 1 times
4 months, 1 week ago
--- ChatGPT ---
There are several Amazon EC2 instance types that support 30 TB of instance store volume storage. The specific instance types available may vary
depending on the AWS region. Here are a few examples of EC2 instance types that support 30 TB of instance store:
i3en.24xlarge: This instance type is part of the I3en family of instances and provides 24 vCPUs, 96 GiB of memory, and 30.5 TB of NVMe SSD
instance store. It is optimized for high-performance workloads and applications that require large amounts of storage, such as data warehousing,
Hadoop, and NoSQL databases.
upvoted 1 times
5 months ago
Selected Answer: A
A and D look the closest. But the question never gives a clue about temporary storage, and the AWS EC2 instance store is "temporary block-level
storage for your instance". Hence I will choose A as per my understanding. Please correct me if I am wrong.
Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
upvoted 4 times
Topic 1
Question #128
A company wants to run applications in containers in the AWS Cloud. These applications are stateless and can tolerate disruptions within the
underlying infrastructure. The company needs a solution that minimizes cost and operational overhead.
What should a solutions architect do to meet these requirements?
A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
Correct Answer:
A
Highly Voted
8 months, 2 weeks ago
Selected Answer: B
it should be B:
https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-eks-support-ec2-spot-instances-managed-node-groups/
upvoted 5 times
Highly Voted
6 months, 1 week ago
Running your Kubernetes and containerized workloads on Amazon EC2 Spot Instances is a great way to save costs. ... AWS makes it easy to run
Kubernetes with Amazon Elastic Kubernetes Service (EKS) a managed Kubernetes service to run production-grade workloads on AWS. To cost
optimize these workloads, run them on Spot Instances. https://aws.amazon.com/blogs/compute/cost-optimization-and-resilience-eks-with-spot-
instances/
upvoted 5 times
Most Recent
5 days, 16 hours ago
Selected Answer: B
Option B is the recommended solution. Using Spot Instances within an Amazon EKS managed node group allows you to run containers in a
managed Kubernetes environment while taking advantage of the cost savings offered by Spot Instances. Spot Instances provide access to spare
EC2 capacity at significantly lower prices than On-Demand Instances. By utilizing Spot Instances in an EKS managed node group, you can reduce
costs while maintaining high availability for your stateless applications.
Option A suggests using Spot Instances in an EC2 Auto Scaling group, which is a valid approach. However, utilizing Amazon EKS provides a more
streamlined and managed environment for running containers.
Options C and D suggest using On-Demand Instances, which would provide stable capacity but may not be the most cost-effective solution for
minimizing costs, as On-Demand Instances typically have higher prices compared to Spot Instances.
upvoted 1 times
1 month ago
There are no additional costs to use Amazon EKS managed node groups. You only pay for the AWS resources that you provision.
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: A
Requirement is "minimizes cost and operational overhead"
A is better option than B as EKS add additional cost and operational overhead.
upvoted 4 times
3 weeks, 3 days ago
USING SPOT INSTANCES WITH EKS
https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks.html
upvoted 1 times
1 month, 1 week ago
Option A is the worst option in terms of operational overhead ... you have to install your own Kubernetes cluster! B is a more suitable option.
upvoted 3 times
2 months, 3 weeks ago
Selected Answer: B
Option B is the best option for meeting the requirements of minimizing cost and operational overhead while running containers in the AWS Cloud.
Amazon EKS is a highly scalable, highly available container orchestration service that takes care of automatically managing and scaling the
underlying container nodes. Using Spot Instances in an Amazon EKS managed node group will help reduce costs compared with On-Demand
Instances, since Spot Instances are EC2 instances available at significantly lower prices, though they can be interrupted with little notice. By
taking advantage of unused EC2 capacity at a reduced price, the company can save money on infrastructure costs without compromising the fault
tolerance or scalability of its containerized applications.
upvoted 2 times
Community vote distribution
B (84%)
Other
3 months ago
B: Spot Instances save cost
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
The answer should be D. A Spot Instance is not a good option at all. The question says "...can tolerate disruptions", but this doesn't mean the
application can run at random time intervals.
upvoted 1 times
3 weeks, 3 days ago
USING SPOT INSTANCES WITH EKS
https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks.html
upvoted 1 times
2 months, 1 week ago
Spot instances are the correct option for this case.
upvoted 1 times
3 months, 3 weeks ago
Answer is A:
Amazon ECS: ECS itself is free, you pay only for Amazon EC2 resources you use.
Amazon EKS: The EKS management layer incurs an additional cost of $144 per month per cluster.
Advantages of Amazon ECS include: Spot instances: Because containers are immutable, you can run many workloads using Amazon EC2 Spot
Instances (which can be shut down with no advance notice) and save 90% on on-demand instance costs.
upvoted 5 times
3 months, 3 weeks ago
Selected Answer: B
Spot instances for cost optimisation and Kubernetes for container management
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: B
A and B both work, but the requirements mention "operational overhead". EKS would allow the company to use a managed service to run the
containerized applications.
upvoted 4 times
6 months ago
Selected Answer: B
The correct answer is B. To minimize cost and operational overhead, the solutions architect should use Spot Instances in an Amazon Elastic
Kubernetes Service (Amazon EKS) managed node group to run the application containers.
Amazon EKS is a fully managed service that makes it easy to run Kubernetes on AWS. By using a managed node group, the company can take
advantage of the operational benefits of Amazon EKS while minimizing the operational overhead of managing the Kubernetes infrastructure. Spot
Instances provide a cost-effective way to run stateless, fault-tolerant applications in containers, making them a good fit for the company's
requirements.
upvoted 5 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
B. Use Spot Instances - Supports Disruption ( stop and start at anytime)
Elastic Kubernetes Service (Amazon EKS) managed node group - Supports containerized application.
upvoted 1 times
6 months, 3 weeks ago
Why not A? EC2 can run containers at a lower cost than EKS...
upvoted 3 times
6 months, 1 week ago
There are no additional costs to use Amazon EKS managed node groups, you only pay for the AWS resources you provision, so I disagree
upvoted 2 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: B
This should explain
https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
upvoted 4 times
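For reference, the Spot-backed managed node group from option B is requested through the standard EKS CreateNodegroup API by setting capacityType to SPOT. The sketch below only assembles the request parameters as a plain dict in the shape boto3's eks.create_nodegroup(**params) accepts; the cluster name, role ARN, and subnet IDs are illustrative placeholders, not values from the question, so it runs without AWS credentials.

```python
# Sketch: request parameters for an EKS managed node group backed by Spot
# Instances, shaped like boto3's eks.create_nodegroup(**params) call.
# Cluster name, role ARN, and subnet IDs are illustrative placeholders.

def spot_nodegroup_params(cluster: str, node_role_arn: str, subnets: list) -> dict:
    return {
        "clusterName": cluster,
        "nodegroupName": f"{cluster}-spot-ng",
        "nodeRole": node_role_arn,
        "subnets": subnets,
        # SPOT tells EKS to provision the group from Spot capacity; the
        # managed node group then handles interruption draining for you.
        "capacityType": "SPOT",
        # Diversifying instance types improves the odds of finding Spot capacity.
        "instanceTypes": ["m5.large", "m5a.large", "m4.large"],
        "scalingConfig": {"minSize": 1, "maxSize": 10, "desiredSize": 3},
    }

params = spot_nodegroup_params(
    "demo-cluster",
    "arn:aws:iam::123456789012:role/demoNodeRole",  # placeholder ARN
    ["subnet-aaa", "subnet-bbb"],
)
print(params["capacityType"])  # SPOT
```

Passing the same dict with capacityType "ON_DEMAND" would give option D's setup; the only structural difference between B and D is this one field.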
Topic 1
Question #129
A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts
connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning
is limiting the company's growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Migrate the PostgreSQL database to Amazon Aurora.
B. Migrate the web application to be hosted on Amazon EC2 instances.
C. Set up an Amazon CloudFront distribution for the web application content.
D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Correct Answer:
AE
Highly Voted
7 months, 3 weeks ago
Selected Answer: AE
I would say A and E since Aurora and Fargate are serverless (less operational overhead).
upvoted 7 times
Most Recent
5 days, 16 hours ago
Selected Answer: AE
A is the correct answer because migrating the database to Amazon Aurora reduces operational overhead and offers scalability and automated
backups.
E is the correct answer because migrating the web application to AWS Fargate with Amazon ECS eliminates the need for infrastructure
management, simplifies deployment, and improves resource utilization.
B. Migrating the web application to Amazon EC2 instances would not directly address the operational overhead and capacity planning concerns
mentioned in the scenario.
C. Setting up an Amazon CloudFront distribution improves content delivery but does not directly address the operational overhead or capacity
planning limitations.
D. Configuring Amazon ElastiCache improves performance but does not directly address the operational overhead or capacity planning challenges
mentioned.
Therefore, the correct answers are A and E as they address the requirements, while the incorrect answers (B, C, D) do not provide the desired
solutions.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: AE
Improve the application's infrastructure = Modernize Infrastructure = Least Operational Overhead = Serverless
upvoted 1 times
2 months, 1 week ago
Selected Answer: AE
A and E are the best options.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: AE
A and E
upvoted 1 times
5 months, 1 week ago
Selected Answer: AE
A and E.
upvoted 1 times
5 months, 2 weeks ago
Community vote distribution
AE (95%)
5%
One should note that Aurora is not serverless. Aurora Serverless and Aurora are two different Amazon services. I prefer C; however, the question
does not mention any frontend requirements.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: AE
Yes, go for A and E, since these two resources are serverless.
upvoted 2 times
6 months ago
Selected Answer: AE
The correct answers are A and E. To improve the application's infrastructure, the solutions architect should migrate the PostgreSQL database to
Amazon Aurora and migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Amazon Aurora is a fully managed, scalable, and highly available relational database service that is compatible with PostgreSQL. Migrating the
database to Amazon Aurora would reduce the operational overhead of maintaining the database infrastructure and allow the company to focus on
building and scaling the application.
AWS Fargate is a fully managed container orchestration service that enables users to run containers without the need to manage the underlying
EC2 instances. By using AWS Fargate with Amazon Elastic Container Service (Amazon ECS), the solutions architect can improve the scalability and
efficiency of the web application and reduce the operational overhead of maintaining the underlying infrastructure.
upvoted 1 times
6 months ago
A and E are obvious choices.
upvoted 1 times
6 months, 1 week ago
Selected Answer: AE
Option A and E
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: AE
A and E
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: CE
C not A. and E
upvoted 1 times
7 months, 1 week ago
A and E
upvoted 1 times
7 months, 3 weeks ago
https://www.examtopics.com/discussions/amazon/view/46457-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
7 months, 3 weeks ago
A and E
Aurora and serverless
upvoted 1 times
8 months ago
Selected Answer: AE
B (X), E (O). Not sure about A, C, or D, but A makes sense.
upvoted 1 times
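To make option E concrete: with ECS on Fargate you only describe the task, and there are no instances to patch or capacity-plan. The dict below mirrors the shape of boto3's ecs.register_task_definition parameters; the image URI, family name, and CPU/memory sizes are illustrative placeholders, and the dict is only built locally, so no AWS access is needed.

```python
# Sketch: an ECS task definition for Fargate, in the shape that
# ecs.register_task_definition(**task_def) expects. The container image,
# family name, and CPU/memory sizes are illustrative placeholders.

task_def = {
    "family": "webapp",
    # FARGATE means AWS runs the containers on managed capacity; there are
    # no EC2 hosts to maintain, which is the "improve infrastructure" part.
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",  # required network mode for Fargate tasks
    "cpu": "512",             # 0.5 vCPU
    "memory": "1024",         # 1 GiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}
print(task_def["requiresCompatibilities"][0])  # FARGATE
```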
Topic 1
Question #130
An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind
an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
Correct Answer:
B
Highly Voted
6 months ago
Selected Answer: B
The correct answer is B. To maintain the desired performance across all instances in the Amazon EC2 Auto Scaling group, the solutions architect
should use a target tracking policy to dynamically scale the Auto Scaling group.
A target tracking policy allows the Auto Scaling group to automatically adjust the number of EC2 instances in the group based on a target value for
a metric. In this case, the target value for the CPU utilization metric could be set to 40% to maintain the desired performance of the application.
The Auto Scaling group would then automatically scale the number of instances up or down as needed to maintain the target value for the metric.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
upvoted 6 times
Most Recent
5 days, 16 hours ago
Selected Answer: B
Target tracking policy is the most appropriate choice. This policy allows ASG to automatically adjust the desired capacity based on a target metric,
such as CPU utilization. By setting the target metric to 40%, ASG will scale the number of instances up or down as needed to maintain the desired
CPU utilization level. This ensures that the application's performance remains optimal.
A suggests using a simple scaling policy, which allows for scaling based on a fixed metric or threshold. However, it may not be as effective as a
target tracking policy in dynamically adjusting the capacity to maintain a specific CPU utilization level.
C suggests using an Lambda to update the desired capacity. While this can be done programmatically, it would require custom scripting and may
not provide the same level of automation and responsiveness as a target tracking policy.
D suggests using scheduled scaling actions to scale up and down ASG at predefined times. This approach is not suitable for maintaining the
desired performance in real-time based on actual CPU utilization.
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
B of course.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
B seems to be the correct response.
With a target tracking scaling policy, you can increase or decrease the current capacity of the group based on a target value for a specific metric.
This policy will help resolve the over-provisioning of your resources. The scaling policy adds or removes capacity as required to keep the metric at,
or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to
changes in the metric due to a changing load pattern.
upvoted 3 times
6 months ago
Selected Answer: B
target tracking - CPU at 40%
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
Community vote distribution
B (100%)
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: B
Option B. Target tracking policy.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
upvoted 4 times
7 months, 3 weeks ago
B
CPU utilization = target tracking
upvoted 2 times
8 months ago
Selected Answer: B
B is the answer
upvoted 1 times
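The 40% CPU target from option B maps directly onto a target tracking configuration: Auto Scaling adds or removes instances to hold the metric near the target value. The sketch below builds the parameters in the shape boto3's autoscaling.put_scaling_policy accepts; the group and policy names are placeholders, and nothing is sent to AWS.

```python
# Sketch: a target tracking scaling policy that keeps average CPU near 40%,
# shaped like autoscaling.put_scaling_policy(**policy) parameters.
# The Auto Scaling group name and policy name are placeholders.

policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "keep-cpu-at-40",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            # Average CPU utilization across all instances in the group.
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Auto Scaling scales out or in to hold the metric near this value.
        "TargetValue": 40.0,
    },
}
print(policy["TargetTrackingConfiguration"]["TargetValue"])  # 40.0
```

Compare this with option A: a simple scaling policy would need you to pick step thresholds and cooldowns yourself, while target tracking derives the scaling actions from the single target value.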
Topic 1
Question #131
A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files
through an Amazon CloudFront distribution. The company does not want the files to be accessible through direct navigation to the S3 URL.
What should a solutions architect do to meet these requirements?
A. Write individual policies for each S3 bucket to grant read permission for only CloudFront access.
B. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront.
C. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon
Resource Name (ARN).
D. Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the
OAI has read permission.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
I want to restrict access to my Amazon Simple Storage Service (Amazon S3) bucket so that objects can be accessed only through my Amazon
CloudFront distribution. How can I do that?
Create a CloudFront origin access identity (OAI)
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
upvoted 22 times
8 months ago
Thanks it convinces me
upvoted 1 times
Most Recent
5 days, 16 hours ago
Selected Answer: D
To meet the requirements of serving files through CloudFront while restricting direct access to the S3 bucket URL, the recommended approach is to
use an origin access identity (OAI). By creating an OAI and assigning it to the CloudFront distribution, you can control access to the S3 bucket.
This setup ensures that the files stored in the S3 bucket are only accessible through CloudFront and not directly through the S3 bucket URL.
Requests made directly to the S3 URL will be blocked.
Option A suggests writing individual policies for each S3 bucket, which can be cumbersome and difficult to manage, especially if there are multiple
buckets involved.
Option B suggests creating an IAM user and assigning it to CloudFront, but this does not address restricting direct access to the S3 bucket URL.
Option C suggests writing an S3 bucket policy with CloudFront distribution ID as the Principal, but this alone does not provide the necessary
restrictions to prevent direct access to the S3 bucket URL.
upvoted 1 times
3 weeks, 6 days ago
DECEMBER 2022 UPDATE:
Restricting access to an Amazon S3 origin:
CloudFront provides two ways to send authenticated requests to an Amazon S3 origin: origin access control (OAC) and origin access identity (OAI).
We recommend using OAC because it supports:
All Amazon S3 buckets in all AWS Regions, including opt-in Regions launched after December 2022
Amazon S3 server-side encryption with AWS KMS (SSE-KMS)
Dynamic requests (PUT and DELETE) to Amazon S3
OAI doesn't work for the scenarios in the preceding list, or it requires extra workarounds in those scenarios.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 1 times
6 months ago
Selected Answer: D
The correct answer is D. To meet the requirements, the solutions architect should create an origin access identity (OAI) and assign it to the
CloudFront distribution. The S3 bucket permissions should be configured so that only the OAI has read permission.
An OAI is a special CloudFront user that is associated with a CloudFront distribution and is used to give CloudFront access to the files in an S3
bucket. By using an OAI, the company can serve the files through the CloudFront distribution while preventing direct access to the S3 bucket.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 3 times
Community vote distribution
D (100%)
6 months, 1 week ago
Selected Answer: D
D is the right answer
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
D is correct, but using OAC instead of OAI would be better, since OAI is legacy
upvoted 3 times
2 months, 1 week ago
Thanks, I didn't know about OAC.
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
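Option D boils down to a bucket policy that grants s3:GetObject only to the distribution's OAI principal (newer OAC setups use a cloudfront.amazonaws.com service principal with a SourceArn condition instead). The snippet below just assembles that policy document as JSON; the bucket name and OAI ID are placeholders, and no AWS call is made.

```python
import json

# Sketch: an S3 bucket policy that lets only a CloudFront origin access
# identity (OAI) read objects. Bucket name and OAI ID are placeholders.

def oai_bucket_policy(bucket: str, oai_id: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCloudFrontOAIReadOnly",
                "Effect": "Allow",
                # Canonical principal form for an OAI.
                "Principal": {
                    "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)

doc = oai_bucket_policy("my-files-bucket", "E1EXAMPLE1234")
print("s3:GetObject" in doc)  # True
```

With this policy attached (and public access blocked), a direct request to the S3 URL is denied while requests routed through the distribution succeed.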
Topic 1
Question #132
A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the
company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the
fastest possible response time.
Which combination should a solutions architect recommend to meet these requirements?
A. Amazon CloudFront and Amazon S3
B. AWS Lambda and Amazon DynamoDB
C. Application Load Balancer with Amazon EC2 Auto Scaling
D. Amazon Route 53 with internal Application Load Balancers
Correct Answer:
A
Highly Voted
5 months ago
Selected Answer: A
Historical reports = Static content = S3
upvoted 8 times
Highly Voted
8 months ago
A is the correct answer
The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time.
upvoted 8 times
Most Recent
5 days, 16 hours ago
By using CloudFront, the website can leverage the global network of edge locations to cache and deliver the performance reports to users from the
nearest edge location, reducing latency and providing fast response times. Amazon S3 serves as the origin for the files, where the reports are
stored.
Option B is incorrect because AWS Lambda and Amazon DynamoDB are not the most suitable services for serving downloadable files and meeting
the website demands globally.
Option C is incorrect because using an Application Load Balancer with Amazon EC2 Auto Scaling may require more infrastructure provisioning and
management compared to the CloudFront and S3 combination. Additionally, it may not provide the same level of global scalability and fast
response times as CloudFront.
Option D is incorrect because while Amazon Route 53 is a global DNS service, it alone does not provide the caching and content delivery
capabilities required for serving the downloadable reports. Internal Application Load Balancers do not address the global scalability and caching
requirements specified in the scenario.
upvoted 1 times
6 months ago
Selected Answer: A
The correct answer is Option A. To meet the requirements, the solutions architect should recommend using Amazon CloudFront and Amazon S3.
By combining Amazon CloudFront and Amazon S3, the solutions architect can provide a scalable and cost-effective solution that limits the
provisioning of infrastructure resources and provides the fastest possible response time.
https://aws.amazon.com/cloudfront/
https://aws.amazon.com/s3/
upvoted 3 times
6 months ago
A is correct
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
A is the best and most cost-effective option if the only requirement is downloading the static, pre-created reports (no data processing before
downloading).
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
Community vote distribution
A (90%)
5%
7 months, 2 weeks ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: A
See this discussion:
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: C
load balancing + scalability + cost effective
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
I think the answer is B
upvoted 1 times
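Option A in practice is a CloudFront distribution whose single origin is the S3 bucket, so edge locations cache the reports close to users worldwide. The dict below sketches that configuration roughly in the shape cloudfront.create_distribution expects (the real API requires more fields); the bucket name, Region, and caller reference are illustrative placeholders, and the dict is only assembled locally.

```python
# Sketch: a minimal CloudFront distribution config with an S3 origin,
# roughly in the shape the CreateDistribution API expects (the real call
# requires additional fields). Bucket name, Region, and caller reference
# are illustrative placeholders.

bucket = "reports-bucket"
dist_config = {
    "CallerReference": "reports-2023-01",
    "Comment": "Serve historical reports from S3 via edge caches",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "s3-reports",
                # The bucket's regional endpoint as the single origin.
                "DomainName": f"{bucket}.s3.us-east-1.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-reports",
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}
print(dist_config["Origins"]["Items"][0]["DomainName"])
```

Nothing here provisions servers, which is why this pairing satisfies the "limit the provisioning of infrastructure resources" requirement better than options C or D.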
Topic 1
Question #133
A company runs an Oracle database on premises. As part of the company’s migration to AWS, the company wants to upgrade the database to the
most recent available version. The company also wants to set up disaster recovery (DR) for the database. The company needs to minimize the
operational overhead for normal operations and DR setup. The company also needs to maintain access to the database's underlying operating
system.
Which solution will meet these requirements?
A. Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different AWS Region.
B. Migrate the Oracle database to Amazon RDS for Oracle. Activate Cross-Region automated backups to replicate the snapshots to another
AWS Region.
C. Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in another AWS Region.
D. Migrate the Oracle database to Amazon RDS for Oracle. Create a standby database in another Availability Zone.
Correct Answer:
D
Highly Voted
7 months, 3 weeks ago
Option C since RDS Custom has access to the underlying OS and it provides less operational overhead. Also, a read replica in another Region can
be used for DR activities.
https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/
upvoted 16 times
1 week, 6 days ago
You can't create cross-Region replicas in RDS Custom for Oracle: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-
rr.html#custom-rr.limitations
upvoted 1 times
Highly Voted
8 months, 2 weeks ago
Selected Answer: C
It should be C:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html
and
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
upvoted 13 times
Most Recent
1 day ago
Selected Answer: A
It requires accessing to the underlying OS , so B/D out. And you can't create cross-Region RDS Custom for Oracle replicas, so C out.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations
upvoted 1 times
5 days, 16 hours ago
Selected Answer: C
By choosing Option C, the company can upgrade the Oracle database, leverage the benefits of Amazon RDS, and have a disaster recovery solution
with minimal operational overhead.
Option A suggests migrating the Oracle database to an Amazon EC2 instance and setting up database replication to a different AWS Region. This
approach requires more operational overhead and management compared to using a managed service like Amazon RDS.
Option B suggests migrating the Oracle database to Amazon RDS for Oracle and activating Cross-Region automated backups. While this provides
backups in another AWS Region, it does not provide the same level of disaster recovery and failover capabilities as a read replica in another Region.
Option D suggests migrating the Oracle database to Amazon RDS for Oracle and creating a standby database in another Availability Zone.
However, this solution only provides availability within the same Region and does not meet the requirement of having disaster recovery across
AWS Regions.
upvoted 1 times
1 week, 6 days ago
Selected Answer: A
You can't create cross-Region read replicas for RDS Custom for Oracle. Please do not select C, despite it having the highest community rating on
here.
Official article that states this here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html#custom-rr.limitations
So, as access to the OS is needed and RDS Custom is ruled out (which DOES give you access), the answer is clearly A.
Community vote distribution
C (61%)
A (28%)
11%
upvoted 2 times
3 weeks, 6 days ago
Selected Answer: A
The correct answer is A.
A. Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different AWS Region.
Reasoning: Amazon RDS is a managed service that abstracts some of the lower-level functionality and decisions you would have with a self-
managed database. While RDS provides a lot of convenience and ease-of-use, it does not provide direct host access. In this case, where
maintaining access to the database's underlying operating system is a requirement, running Oracle on an EC2 instance would be the right
approach. This also allows you to set up replication (using Oracle Data Guard, for example) to another EC2 instance in a different AWS Region for
disaster recovery.
upvoted 3 times
3 weeks, 6 days ago
Selected Answer: C
Q: What is Amazon RDS Custom?
Amazon RDS Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying operating
system and database environment. Amazon RDS Custom automates setup, operation, and scaling of databases in the cloud while granting
customers access to the database and underlying operating system to configure settings, install patches, and enable native features to meet the
dependent application's requirements.
Q: What relational database engines does Amazon RDS Custom support?
Amazon RDS Custom supports the Oracle and SQL Server database engines.
https://aws.amazon.com/rds/custom/faqs/#:~:text=Amazon%20RDS%20Custom%20automates%20setup,meet%20the%20dependent%20applicati
on's%20requirements.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
Clearly is A: The company also needs to maintain access to the database's underlying operating system
upvoted 3 times
1 month, 2 weeks ago
C - https://youtu.be/eWlgvtk6BpQ?t=343
upvoted 1 times
1 month, 2 weeks ago
General limitations for RDS Custom for Oracle replication
RDS Custom for Oracle replicas have the following limitations:
You can't create RDS Custom for Oracle replicas in read-only mode. However, you can manually change the mode of mounted replicas to read-
only, and from read-only to mounted. For more information, see the documentation for the create-db-instance-read-replica AWS CLI command.
You can't create cross-Region RDS Custom for Oracle replicas.
You can't change the value of the Oracle Data Guard CommunicationTimeout parameter. This parameter is set to 15 seconds for RDS Custom for
Oracle DB instances.
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: C
It literally says it needs to have access to the underlying OS.
Thank god for the community because whoever chose the answer for ExamTopics is going to fail people.
upvoted 2 times
1 month, 2 weeks ago
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html
You can't create cross-Region RDS Custom for Oracle replicas.
I think correct answer is A
upvoted 2 times
1 month, 4 weeks ago
Selected Answer: A
Because of access to underlying system
upvoted 1 times
1 month, 3 weeks ago
You must mean C then, yeah?
upvoted 1 times
2 months ago
Selected Answer: A
Cross-Region Custom Oracle replicas aren't supported
upvoted 2 times
2 months, 1 week ago
Selected Answer: C
The company also needs to maintain access to the database's underlying operating system.
= RDS CUSTOM
upvoted 2 times
2 months, 1 week ago
Selected Answer: C
At first I thought it was A but apparently RDS Custom for Oracle allows cross region replication and access to the OS.
https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-capabilities-in-database-environment/
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
Requirements and limitations for RDS Custom for Oracle replication: Cross-Region Oracle replicas aren't supported.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-rr.html
upvoted 2 times
2 months, 4 weeks ago
Keys:
1) upgrade the database to the most recent available version
2) needs to maintain access to the database's underlying operating system
These two are possible only based on the Oracle database to Amazon RDS Custom for Oracle.
So the correct answer must be Option (C).
upvoted 1 times
Topic 1
Question #134
A company wants to move its application to a serverless solution. The serverless solution needs to analyze existing and new data by using SQL.
The company stores the data in an Amazon S3 bucket. The data requires encryption and must be replicated to a different AWS Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an
S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region kays (SSE-KMS). Use Amazon Athena to query the data.
B. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an
S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon RDS to query the data.
C. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another
Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon Athena to query the data.
D. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another
Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon RDS to query the data.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: C
SSE-KMS vs. SSE-S3: the latter seems to have less overhead, as the keys are automatically generated by S3 and applied to data at upload, and
require no further action. KMS provides more flexibility, but in turn involves a different service, which ultimately is more "complex" than
managing just one (S3). So A and B are excluded. If you are in doubt: you have 2 buckets in A and B, while keeping just one in C and D.
https://s3browser.com/server-side-encryption-types.aspx
Deciding between C and D is deciding between Athena and RDS. RDS is a relational DB, and we have documents on S3, which is the use case for Athena.
Athena is also serverless, which eliminates the need to manage the underlying infrastructure and capacity. So C is the answer.
https://aws.amazon.com/athena/
upvoted 39 times
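The option-C setup described above can be sketched as plain request payloads. This is only an illustration: every bucket name, role ARN, table name, and column is a hypothetical placeholder, and the dicts are simply the shapes that boto3 calls like `put_bucket_encryption`, `put_bucket_replication`, and Athena's `start_query_execution` would consume.

```python
# Sketch of the option-C setup: SSE-S3 default encryption, Cross-Region
# Replication, and a serverless Athena query. All names and ARNs are
# hypothetical placeholders.

# Payload that s3.put_bucket_encryption would take: S3-managed keys (SSE-S3).
encryption_config = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    ]
}

# Payload that s3.put_bucket_replication would take. SSE-S3 objects
# replicate without any extra key handling.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # placeholder role
    "Rules": [
        {
            "ID": "replicate-all",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::analytics-replica"},
        }
    ],
}

# Athena queries the objects in place with standard SQL -- no servers to
# manage. Table and column names are invented for the example.
athena_query = "SELECT city, AVG(temperature) FROM sensor_data GROUP BY city"
```

Note that nothing in the replication rule references an encryption key: with SSE-S3, S3 owns the keys end to end, which is the "less operational overhead" argument in a nutshell.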
1 week, 5 days ago
See comment from Nicknameinvalid below. You get your answer.
upvoted 1 times
Highly Voted
8 months ago
Answer is A:
Amazon S3 Bucket Keys reduce the cost of Amazon S3 server-side encryption using AWS Key Management Service (SSE-KMS). This new bucket-
level key for SSE can reduce AWS KMS request costs by up to 99 percent by decreasing the request traffic from Amazon S3 to AWS KMS. With a
few clicks in the AWS Management Console, and without any changes to your client applications, you can configure your bucket to use an S3
Bucket Key for AWS KMS-based encryption on new objects.
The existing S3 bucket might have unencrypted data; encryption will apply only to new data received after encryption is enabled on the new bucket.
upvoted 16 times
1 month, 1 week ago
If you want to use the cost argument: SSE-S3 is free so it's cheaper than any other encryption solution (all of the others have a cost), so the
answer should be C
upvoted 1 times
1 month, 2 weeks ago
Don't know what "kays" are, could they be a trap?
upvoted 1 times
2 weeks, 6 days ago
Kays = keys; a typo, I think.
upvoted 1 times
5 months, 2 weeks ago
I didn't read anywhere in the question where cost was an issue of consideration, so how you made it a main issue here is beyond me.
upvoted 6 times
6 months, 3 weeks ago
The cost reduction is a comparison between a bucket-level KMS key and object-level KMS keys, not between SSE-KMS and SSE-S3. Hence it's a wrong
comparison.
upvoted 2 times
Most Recent
Community vote distribution: A (52%), C (48%)
5 days, 16 hours ago
Selected Answer: A
Option A creates a new S3 bucket, allowing for isolation and organization of the data in a serverless solution.
S3 CRR is used to automatically replicate the encrypted objects to an S3 bucket in another Region, providing data replication and disaster recovery
capability.
SSE-KMS ensures the encryption of data at rest with a secure key management service.
Athena is a serverless query service that enables analyzing data in S3 using SQL queries without the need for managing infrastructure. It allows for
easy analysis of the existing and new data.
Option B suggests using Amazon RDS to query the data. However, Amazon RDS is a managed relational database service and not suitable for
analyzing data stored in S3 directly.
Option C suggests using server-side encryption with Amazon S3 managed encryption keys (SSE-S3). While SSE-S3 provides encryption, using SSE-
KMS with multi-Region keys offers better control and security for data encryption.
Option D also suggests using Amazon RDS to query the data, which is not the most suitable service for analyzing data in S3.
upvoted 1 times
1 week, 6 days ago
Selected Answer: C
C is correct.
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 1 times
2 weeks, 1 day ago
answer c: https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
upvoted 1 times
3 weeks, 6 days ago
Answer is C: By default, Amazon S3 doesn't replicate objects that are stored at rest using server-side encryption with KMS keys. To replicate
encrypted objects, you modify the bucket replication configuration to tell Amazon S3 to replicate these objects.
https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c03/view/14/
upvoted 1 times
3 weeks, 6 days ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-4.html
upvoted 1 times
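The behavior cited above (S3 does not replicate SSE-KMS objects by default) shows up directly in the replication rule schema: the rule must opt in via SourceSelectionCriteria and name a replica key. A hypothetical sketch of such a rule, with placeholder ARNs throughout:

```python
# A replication rule that opts in to SSE-KMS objects. SourceSelectionCriteria
# tells S3 to replicate KMS-encrypted objects, and the destination names the
# KMS key to re-encrypt with. Both ARNs are hypothetical placeholders.
kms_replication_rule = {
    "ID": "replicate-kms-objects",
    "Priority": 1,
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "DeleteMarkerReplication": {"Status": "Disabled"},
    "SourceSelectionCriteria": {
        "SseKmsEncryptedObjects": {"Status": "Enabled"}
    },
    "Destination": {
        "Bucket": "arn:aws:s3:::replica-bucket",
        "EncryptionConfiguration": {
            "ReplicaKmsKeyID": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        },
    },
}
```

The extra fields compared with an SSE-S3 rule are exactly the "operational overhead" the C camp points to.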
1 month, 1 week ago
Selected Answer: A
Since SSE-S3 is owned by AWS, it will take care of decryption upon replication, because you need to re-encrypt the object again in the new Region.
So, A.
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: C
The question says "LEAST operational overhead", and that is the option that is configured by default.
KMS is another level of security but has additional cost and additional operational effort: to upload an object you need to identify the KMS key
you want to use, and to access the object you will (again) need to identify the KMS key you want to use.
All Amazon S3 buckets have encryption configured by default, and objects are automatically encrypted by using server-side encryption with
Amazon S3 managed keys (SSE-S3). This encryption setting applies to all objects in your Amazon S3 buckets.
If you need more control over your keys, such as managing key rotation and access policy grants, you can choose to use server-side encryption
with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
upvoted 1 times
1 month, 2 weeks ago
Folks who chose A, why do you need a new S3 bucket? Please explain
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: C
Answer C:
Nowhere is it asked to have the same keys used for decryption, or to audit; the main aim is to reduce operational overhead, so this is possible only
with SSE-S3.
upvoted 2 times
1 month, 4 weeks ago
Selected Answer: A
Option C is not the best solution because it uses server-side encryption with Amazon S3 managed encryption keys (SSE-S3), which is not multi-
Region. This means that the encryption key will only be available in the Region where the S3 bucket resides and cannot be replicated to another
Region. As a result, if the bucket needs to be replicated to another Region, it will require extra steps to ensure that the encryption keys are also
available in the new Region. In contrast, option A uses server-side encryption with AWS KMS multi-Region keys (SSE-KMS), which can be replicated
to another Region and makes it easier to manage the encryption keys in a multi-Region setup.
upvoted 9 times
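For those leaning toward A on the multi-Region-key argument above: a multi-Region KMS key is created once with MultiRegion=True and then replicated into the destination Region. A sketch of the two KMS payloads involved (`create_key` and `replicate_key`); the key ID and Region are placeholders:

```python
# Multi-Region KMS key: created once, then replicated to the second Region
# so replicated S3 objects can be decrypted there with related key material.
# Payloads for kms.create_key and kms.replicate_key; values are placeholders.
create_key_params = {
    "Description": "multi-Region key for replicated S3 data",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "MultiRegion": True,  # produces a key ID beginning with "mrk-"
}

replicate_key_params = {
    "KeyId": "mrk-1234abcd12ab34cd56ef1234567890ab",  # placeholder mrk- key ID
    "ReplicaRegion": "us-west-2",
}
```

Note (per a comment further down) that S3 currently treats multi-Region keys as independent keys during replication, so the object is still decrypted and re-encrypted in transit between buckets.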
2 months ago
Selected Answer: A
It's A and not C because the question doesn't mention whether the data is already encrypted or not.
That's why solution A can be used in both cases.
upvoted 2 times
2 months, 1 week ago
• S3 Replication Encryption Considerations:
○ Unencrypted objects and objects encrypted with SSE-S3 are replicated by default
○ Objects encrypted with SSE-C (customer provided key) are never replicated
○ For objects encrypted with SSE-KMS, you need to enable the option, because by default they are not replicated; the object is decrypted before
being sent to the target and re-encrypted in the target:
§ Specify which KMS Key to encrypt the objects within the target bucket
§ Adapt the KMS Key Policy for the target key
§ An IAM Role with kms:Decrypt for the source KMS Key and kms:Encrypt for the target KMS Key
§ You might get KMS throttling errors, in which case you can ask for a Service Quotas increase
○ You can use multi-region AWS KMS Keys, but they are currently treated as independent keys by Amazon S3 (the object will still be decrypted and
then encrypted)
so it's C
upvoted 1 times
2 months, 1 week ago
Selected Answer: C
existing bucket and SSE-S3
upvoted 1 times
2 months, 1 week ago
Crap man! I hate these at 51/49 % where's the truth man? I'm just here to study... lol :)
upvoted 10 times
1 month ago
the thoughts of my mind as well:)
upvoted 2 times
1 month, 2 weeks ago
So many controversial questions
upvoted 2 times
2 months, 1 week ago
This is a tricky one. C would have less operational overhead, but CRR doesn't encrypt already existing objects, so you'd need to create a new
bucket, upload the existing data to the new bucket, and then encrypt the bucket.
upvoted 2 times
1 month ago
You can replicate an existing bucket with all its objects; you do not need to create a new one for that.
upvoted 1 times
2 months, 1 week ago
A key issue in this question is the fact that the existing data on S3 requires encryption, meaning it is currently unencrypted.
So which is easier: to create a new S3 bucket and move the data, or to go through the process of encrypting already existing unencrypted data?
I would just create a new S3 bucket.
upvoted 1 times
Topic 1
Question #135
A company runs workloads on AWS. The company needs to connect to a service from an external provider. The service is hosted in the provider's
VPC. According to the company’s security team, the connectivity must be private and must be restricted to the target service. The connection
must be initiated only from the company’s VPC.
Which solution will meet these requirements?
A. Create a VPC peering connection between the company's VPC and the provider's VPC. Update the route table to connect to the target
service.
B. Ask the provider to create a virtual private gateway in its VPC. Use AWS PrivateLink to connect to the target service.
C. Create a NAT gateway in a public subnet of the company’s VPC. Update the route table to connect to the target service.
D. Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the target service.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: D
**AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your traffic to the
public internet**. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify your network
architecture.
Interface **VPC endpoints**, powered by AWS PrivateLink, connect you to services hosted by AWS Partners and supported solutions available in
AWS Marketplace.
https://aws.amazon.com/privatelink/
upvoted 20 times
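The consumer side of the pattern described above boils down to one interface endpoint pointed at the provider's endpoint service name. A minimal sketch of the payload EC2's `create_vpc_endpoint` would take; every ID and the service name are hypothetical placeholders:

```python
# Consumer-side PrivateLink: an interface VPC endpoint in the company's VPC
# that targets the provider's endpoint service. This is the request shape
# ec2.create_vpc_endpoint would consume; all IDs are placeholders.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc1234",                       # company's VPC
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    "SubnetIds": ["subnet-0abc1234"],              # ENIs land here
    "SecurityGroupIds": ["sg-0abc1234"],           # restrict who can connect
    "PrivateDnsEnabled": False,
}
```

Because the endpoint lives in the company's VPC and only resolves to the one service, traffic can be initiated solely from the company's side, which is exactly the restriction the security team asked for.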
Highly Voted
5 months, 1 week ago
Selected Answer: D
The solution that meets these requirements best is option D.
By asking the provider to create a VPC endpoint for the target service, the company can use AWS PrivateLink to connect to the target service. This
enables the company to access the service privately and securely over an Amazon VPC endpoint, without requiring a NAT gateway, VPN, or AWS
Direct Connect. Additionally, this will restrict the connectivity only to the target service, as required by the company's security team.
Option A VPC peering connection may not meet security requirement as it can allow communication between all resources in both VPCs.
Option B, asking the provider to create a virtual private gateway in its VPC and use AWS PrivateLink to connect to the target service is not the
optimal solution because it may require the provider to make changes and also you may face security issues.
Option C, creating a NAT gateway in a public subnet of the company’s VPC can expose the target service to the internet, which would not meet the
security requirements.
upvoted 5 times
Most Recent
5 days, 15 hours ago
Selected Answer: D
Option D meets the requirements of establishing a private and restricted connection to the service hosted in the provider's VPC. By asking the
provider to create a VPC endpoint for the target service, you can establish a direct and private connection from your company's VPC to the target
service. AWS PrivateLink ensures that the connectivity remains within the AWS network and does not require internet access. This ensures both
privacy and restriction to the target service, as the connection can only be initiated from your company's VPC.
A. VPC peering does not restrict access only to the target service.
B. PrivateLink is typically used for accessing AWS services, not external services in a provider's VPC.
C. NAT gateway does not provide a private and restricted connection to the target service.
Option D is the correct choice as it uses AWS PrivateLink and VPC endpoint to establish a private and restricted connection from the company's
VPC to the target service in the provider's VPC.
upvoted 1 times
4 weeks ago
VPC Endpoint (Target Service) - for specific services (not accessing whole vpc)
VPC Peering - (accessing whole VPC)
upvoted 2 times
1 month ago
VPC Peering Connection:
All resources in a VPC, such as ECSs and load balancers, can be accessed.
VPC Endpoint:
Allows access to a specific service or application. Only the ECSs and load balancers in the VPC for which VPC endpoint services are created can be
accessed.
Community vote distribution: D (100%)
upvoted 1 times
1 month ago
Selected Answer: D
Option D, but it seems that it is vice versa: the provider needs to create the PrivateLink service, and you a VPC endpoint to connect to it.
upvoted 1 times
1 month, 2 weeks ago
AWS PrivateLink / VPC Endpoint Services:
• Connect services privately from your service VPC to customers VPC
• Doesn’t need VPC Peering, public Internet, NAT Gateway, Route Tables
• Must be used with Network Load Balancer & ENI
upvoted 2 times
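The provider-side bullets above can be sketched the same way: the service sits behind a Network Load Balancer, and the company's account is allow-listed. Payloads for `create_vpc_endpoint_service_configuration` and `modify_vpc_endpoint_service_permissions`, with placeholder ARNs and account IDs:

```python
# Provider-side PrivateLink: expose the service through an endpoint service
# configuration backed by a Network Load Balancer, then allow-list the
# consumer account. All ARNs, IDs, and accounts are placeholders.
service_config_params = {
    "AcceptanceRequired": True,  # provider approves each connection request
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/provider-nlb/abc123"
    ],
}

permission_params = {
    "ServiceId": "vpce-svc-0123456789abcdef0",          # placeholder service
    "AddAllowedPrincipals": ["arn:aws:iam::444455556666:root"],  # consumer
}
```

The allow-list plus connection acceptance is what keeps the service restricted to the company's VPC only.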
4 months, 1 week ago
Selected Answer: D
D. Here you are the one initiating the connection
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: D
PrivateLink is a more generalized technology for linking VPCs to other services. This can include multiple potential endpoints: AWS services, such as
Lambda or EC2; Services hosted in other VPCs; Application endpoints hosted on-premises.
https://www.tinystacks.com/blog-post/aws-vpc-peering-vs-privatelink-which-to-use-and-when/
upvoted 1 times
4 months, 4 weeks ago
Selected Answer: D
While VPC peering enables you to privately connect VPCs, AWS PrivateLink enables you to configure applications or services in VPCs as endpoints
that your VPC peering connections can connect to.
upvoted 1 times
6 months ago
Selected Answer: D
The solution that meets these requirements is Option D:
* Ask the provider to create a VPC endpoint for the target service.
* Use AWS PrivateLink to connect to the target service.
Option D involves asking the provider to create a VPC endpoint for the target service, which is a private connection to the service that is hosted in
the provider's VPC. This ensures that the connection is private and restricted to the target service, as required by the company's security team. The
company can then use AWS PrivateLink to connect to the target service over the VPC endpoint. AWS PrivateLink is a fully managed service that
enables you to privately access services hosted on AWS, on-premises, or in other VPCs. It provides secure and private connectivity to services by
using private IP addresses, which ensures that traffic stays within the Amazon network and does not traverse the public internet.
Therefore, Option D is the solution that meets the requirements.
upvoted 2 times
6 months ago
AWS PrivateLink documentation: https://docs.aws.amazon.com/privatelink/latest/userguide/what-is-privatelink.html
upvoted 1 times
6 months ago
D is right; if the requirement had been OK with the public internet, then option C would have been OK.
upvoted 1 times
6 months ago
Selected Answer: D
D (VPC endpoint) looks correct. Below are the differences between VPC Peering & VPC endpoints.
https://support.huaweicloud.com/intl/en-us/vpcep_faq/vpcep_04_0004.html
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
D is the right answer
upvoted 1 times
6 months, 2 weeks ago
answer is D
upvoted 1 times
Topic 1
Question #136
A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and
accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)
A. Create an ongoing replication task.
B. Create a database backup of the on-premises database.
C. Create an AWS Database Migration Service (AWS DMS) replication server.
D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.
Correct Answer:
CD
Highly Voted
8 months, 1 week ago
Selected Answer: AC
AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully
operational during the migration, minimizing downtime to applications that rely on the database.
... With AWS Database Migration Service, you can also continuously replicate data with low latency from any supported source to any supported
target.
https://aws.amazon.com/dms/
upvoted 17 times
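The A + C combination above maps onto two concrete DMS resources: a replication instance (the "replication server") and a task whose migration type includes change data capture, which is what keeps Aurora synchronized with the live on-premises database. A sketch of the parameter payloads the DMS `create_replication_instance` and `create_replication_task` calls would take; all identifiers and ARNs are placeholders:

```python
# DMS pieces for answers A and C. "full-load-and-cdc" performs an initial
# full load and then ongoing replication (CDC), so the source stays online
# and the target stays in sync. Names and ARNs are placeholders.
replication_instance_params = {
    "ReplicationInstanceIdentifier": "pg-migration-instance",
    "ReplicationInstanceClass": "dms.t3.medium",
}

replication_task_params = {
    "ReplicationTaskIdentifier": "pg-to-aurora-ongoing",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:111122223333:endpoint/source",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:111122223333:endpoint/target",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:111122223333:rep/server",
    "MigrationType": "full-load-and-cdc",  # full load, then ongoing replication
    "TableMappings": '{"rules": []}',      # selection rules go here
}
```

No SCT step appears because source and target are both PostgreSQL; schema conversion is only needed between different engines.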
Most Recent
5 days, 15 hours ago
Selected Answer: AC
These two actions (AC) will help meet the requirements of migrating the on-premises PostgreSQL database to Amazon Aurora PostgreSQL while
keeping the on-premises database accessible and synchronized with the Aurora database. The ongoing replication task will ensure continuous data
replication between the on-premises database and Aurora. The AWS DMS replication server will facilitate the migration process and handle the
data replication.
B. Creating a database backup does not ensure ongoing synchronization.
D. Converting the database schema does not address the requirement of synchronization.
E. Creating an EventBridge rule only monitors synchronization, but doesn't handle migration.
The correct combination is A and C.
upvoted 1 times
2 weeks, 5 days ago
Answer is CD. Postgresql to Aurora Postgresql needed SCT.
https://aws.amazon.com/ko/dms/schema-conversion-tool/
upvoted 1 times
2 weeks, 6 days ago
Selected Answer: AC
Option A & C are the right answer.
upvoted 1 times
2 months ago
Selected Answer: AC
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-postgresql-database-to-aurora-postgresql.html
upvoted 1 times
3 months ago
A->https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.oracle2rds.replication.html
C->https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
upvoted 2 times
3 months, 1 week ago
Selected Answer: AC
This question gives us two conditions: the on-premises database must remain online and accessible during the migration, and the Aurora database
must remain synchronized with the on-premises database. To meet both, A and C are the correct options for us.
PS: if the question were just asking about the DB migration process alone, all options would be correct.
Community vote distribution: AC (85%), CD (15%)
upvoted 1 times
5 months ago
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-postgresql-database-to-aurora-postgresql.html
This link talks about using DMS . I saw the other link pointing to SCT - not sure which one is correct
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: CD
DMS for database migration
SCT for having the same scheme
upvoted 3 times
4 months, 1 week ago
The source and destination are both PostgreSQL, so schema conversion is not needed.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: AC
AWS Database Migration Service (AWS DMS)
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: AC
AC, here it is clearly shown https://docs.aws.amazon.com/zh_cn/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql.html
upvoted 3 times
5 months, 2 weeks ago
You nailed it !
upvoted 1 times
6 months ago
A. Create an ongoing replication task: An ongoing replication task can be used to continuously replicate data from the on-premises database to
the Aurora database. This will ensure that the Aurora database remains in sync with the on-premises database.
D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT): The AWS SCT can be used to convert the schema of the
on-premises database to a format that is compatible with Aurora. This will ensure that the data can be properly migrated and that the Aurora
database can be used with the same applications and queries as the on-premises database.
upvoted 2 times
4 months, 1 week ago
The source and destination are both PostgreSQL, so schema conversion is not needed.
upvoted 1 times
6 months ago
Selected Answer: AC
To meet the requirements of maintaining an online and accessible on-premises database while migrating to Amazon Aurora PostgreSQL and
keeping the databases synchronized, a solutions architect should take the following actions:
Option A. Create an ongoing replication task. This will allow the architect to continuously replicate data from the on-premises database to the
Aurora database.
Option C. Create an AWS Database Migration Service (AWS DMS) replication server. This will allow the architect to use AWS DMS to migrate data
from the on-premises database to the Aurora database. AWS DMS can also be used to continuously replicate data between the two databases to
keep them synchronized.
upvoted 2 times
6 months ago
Selected Answer: CD
C & D. SCT is required; it's a mandate, not an option.
upvoted 2 times
6 months ago
Selected Answer: CD
Answer is CD. Postgresql to Aurora Postgresql needed SCT.
https://aws.amazon.com/ko/dms/schema-conversion-tool/
upvoted 1 times
6 months ago
Answer is CD. Postgresql to Aurora Postgresql needed SCT.
https://aws.amazon.com/ko/dms/schema-conversion-tool/
upvoted 1 times
6 months, 1 week ago
Selected Answer: AC
You do not need to use SCT if you are migrating the same DB engine
• Ex: On-Premise PostgreSQL => RDS PostgreSQL
• The DB engine is still PostgreSQL (RDS is the platform)
upvoted 3 times
Topic 1
Question #137
A company uses AWS Organizations to create dedicated AWS accounts for each business unit to manage each business unit's account
independently upon request. The root email recipient missed a notification that was sent to the root user email address of one account. The
company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?
A. Configure the company’s email server to forward notification email messages that are sent to the AWS account root user email address to
all users in the organization.
B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts.
Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
C. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and
forwarding those alerts to the appropriate groups.
D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account
alternate contacts in the AWS Organizations console or programmatically.
Correct Answer:
D
Highly Voted
8 months, 1 week ago
Selected Answer: B
Use a group email address for the management account's root user
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
upvoted 21 times
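Alternate contacts, mentioned in options B and D, are set per member account through the Account Management API. A sketch of the payload `put_alternate_contact` would take; every value below is a placeholder, and the email address illustrates the distribution-list best practice:

```python
# Alternate contact for a member account so operational notifications reach
# a monitored distribution list rather than a single root mailbox. This is
# the request shape account.put_alternate_contact would consume; all values
# are placeholders.
alternate_contact_params = {
    "AccountId": "111122223333",            # member account, placeholder
    "AlternateContactType": "OPERATIONS",   # SECURITY and BILLING also exist
    "Name": "Ops Team",
    "Title": "Operations",
    "EmailAddress": "aws-ops-alerts@example.com",  # group address, not a person
    "PhoneNumber": "+1-555-0100",
}
```

Repeating this for each contact type, combined with root addresses that are group distribution lists, is what keeps notifications from depending on one inbox.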
Most Recent
5 days, 15 hours ago
Selected Answer: B
Option B ensures that all future notifications are not missed by configuring the AWS account root user email addresses as distribution lists that are
monitored by a few administrators. By setting up alternate contacts in the AWS Organizations console or programmatically, the notifications can be
sent to the appropriate administrators responsible for monitoring and responding to alerts. This solution allows for centralized management of
notifications and ensures they are limited to account administrators.
A. Floods all users with notifications, lacks granularity.
C. Manual forwarding introduces delays, centralizes responsibility.
D. No flexibility for specific account administrators, limits customization.
upvoted 1 times
1 week, 3 days ago
All admins need access, or else some won't get the right mails and can't do their job; sending them only to a few would disrupt the workflow, so it
is D.
upvoted 1 times
2 weeks, 6 days ago
Selected Answer: D
From the links provided below, there is no mention of a distribution list capability within AWS:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
As per link for best practices:
Use a group email address for the management account's root user!
upvoted 1 times
4 weeks ago
The clue is in the pudding!!
Question: account "administrators"
Answer: Configure all AWS account root user email addresses as distribution lists that go to a few "administrators"
upvoted 1 times
2 months ago
Selected Answer: B
Option A: wrong - sends email to everybody
Option B: correct (but sub-optimal because distribution lists aren't all that secure)
Option C: wrong - single point of failure on the new administrator
Option D: wrong - each root email address must be unique, you can't change them all to the same one
Community vote distribution: B (83%), D (17%)
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
The more aligned answer to this article:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
is B.
D would be best if it'd said that the email you configure as "root user email address" will be a distribution list.
The phrase "all future notifications are not missed" points to D, cos' it said:
".. and all newly created accounts to use the same root user email address"
so the future account that will be created will be covered with the business policy.
It's not 100% clear, but I'll choose B.
upvoted 2 times
2 months, 3 weeks ago
A question: if people keep voting on the questions, why don't the administrators change the correct answer? Is it just left to interpretation?
upvoted 1 times
2 months, 3 weeks ago
The "examtopics" administrator completely ignores marking the correct answer, and it is evident that many answers indicated as "correct" are not.
It says very little for the service they provide.
upvoted 1 times
3 months, 1 week ago
Using the method of crossing out the option that does not fit....
Option A: address to all users of organization (wrong)
Option B: go to a few administrators who can respond to alerts (the question says to send notifications to administrators, not a selected few)
Option C: send to one administrator and giving him responsibility (wrong)
Option D: correct (as this is the one option left after checking all others).
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: D
Option B does not meet the requirements because it would require configuring all AWS account root user email addresses as distribution lists,
which is not necessary to meet the requirements.
upvoted 2 times
6 months ago
Unless I am reading this wrong from AWS, it seems D is proper as it says to use a single account and then set to forward to other emails.
Use an email address that forwards received messages directly to a list of senior business managers. In the event that AWS needs to contact the
owner of the account, for example, to confirm access, the email is distributed to multiple parties. This approach helps to reduce the risk of delays in
responding, even if individuals are on vacation, out sick, or leave the business.
upvoted 2 times
6 months ago
Selected Answer: D
To meet the requirements of ensuring that all future notifications are not missed and are limited to account administrators, the company should
take the following action:
Option D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account
alternate contacts in the AWS Organizations console or programmatically.
By configuring all AWS accounts to use the same root user email address and setting up AWS account alternate contacts, the company can ensure
that all notifications are sent to a single email address that is monitored by one or more administrators. This will allow the company to ensure that
all notifications are received and responded to promptly, without the risk of notifications being missed.
upvoted 3 times
5 months, 1 week ago
Option D would not meet the requirement of limiting the notifications to account administrators. Instead, it is better to use option B, which is to
configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts. This way, the
company can ensure that the notifications are received by the appropriate people and that they are not missed. Additionally, AWS account
alternate contacts can be configured in the AWS Organizations console or programmatically, which allows the company to have more granular
control over who receives the notifications.
upvoted 4 times
6 months ago
B makes more sense
upvoted 1 times
6 months, 1 week ago
answer b is makes more sense
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: B
B makes more sense and is a best practise
upvoted 1 times
8 months, 1 week ago
Selected Answer: B
B makes better sense in the context
upvoted 3 times
Topic 1
Question #138
A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2
instance in a single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This
application stores the details in a PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?
A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for
EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for
EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2
instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2
instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database
Correct Answer:
B
Highly Voted
8 months, 1 week ago
Selected Answer: B
Migrating to Amazon MQ reduces the queue-management overhead, so C and D are dismissed.
Deciding between A and B means choosing between an Auto Scaling group of EC2 instances and RDS for PostgreSQL (both Multi-AZ). The RDS option has
less operational impact, as it provides the required tools and software as a service. Consider, for instance, the effort to add an additional node,
like a read replica, to the DB.
https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html
https://aws.amazon.com/rds/postgresql/
upvoted 16 times
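The managed replacements in option B can be sketched as plain parameter payloads for `mq.create_broker` and `rds.create_db_instance`; names, sizes, and versions are placeholders. One caveat worth flagging: RabbitMQ brokers on Amazon MQ use a multi-AZ cluster deployment mode rather than ActiveMQ's active/standby pair, so CLUSTER_MULTI_AZ is shown here even though the question says "active/standby":

```python
# Managed replacements for the single-AZ pieces: an Amazon MQ RabbitMQ
# broker spread across AZs and a Multi-AZ RDS PostgreSQL instance. These are
# partial request shapes only; all names and sizes are placeholders.
broker_params = {
    "BrokerName": "orders-broker",
    "EngineType": "RABBITMQ",
    "DeploymentMode": "CLUSTER_MULTI_AZ",  # RabbitMQ's multi-AZ option on MQ
    "HostInstanceType": "mq.m5.large",
    "PubliclyAccessible": False,
}

db_params = {
    "DBInstanceIdentifier": "orders-db",
    "Engine": "postgres",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MultiAZ": True,  # synchronous standby in another AZ, automatic failover
}
```

Both services handle failover themselves, which is where the "least operational overhead" of answer B comes from compared with self-managed Auto Scaling groups.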
6 months, 4 weeks ago
This also helps anyone in doubt; https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html
upvoted 1 times
8 months ago
Yes, but active/standby is fault tolerance, not HA. After thinking about it, I would concede that B is probably the answer that will be marked
correct, but it's not a great question.
upvoted 2 times
Most Recent
5 days, 15 hours ago
Selected Answer: B
Option B provides the highest availability with the least operational overhead. By migrating the queue to a redundant pair of RabbitMQ instances
on Amazon MQ, the messaging system becomes highly available. Creating a Multi-AZ Auto Scaling group for EC2 instances hosting the application
ensures that it can automatically scale and maintain availability across multiple Availability Zones. Migrating the database to a Multi-AZ
deployment of Amazon RDS for PostgreSQL provides automatic failover and data replication across multiple Availability Zones, enhancing
availability and reducing operational overhead.
A. Incorrect because it does not address the high availability requirement for the RabbitMQ queue and the PostgreSQL database.
C. Incorrect because it does not provide redundancy for the RabbitMQ queue and does not address the high availability requirement for the
PostgreSQL database.
D. Incorrect because it does not address the high availability requirement for the RabbitMQ queue and does not provide redundancy for the
application instances.
upvoted 1 times
3 months, 4 weeks ago
Selected Answer: B
B for me.
upvoted 1 times
6 months ago
Selected Answer: B
To meet the requirements of providing the highest availability with the least operational overhead, the solutions architect should take the following
actions:
* By migrating the queue to Amazon MQ, the architect can take advantage of the built-in high availability and failover capabilities of the service,
which will help ensure that messages are delivered reliably and without interruption.
* By creating a Multi-AZ Auto Scaling group for the EC2 instances that host the application, the architect can ensure that the application is highly
available and able to handle increased traffic without the need for manual intervention.
* By migrating the database to a Multi-AZ deployment of Amazon RDS for PostgreSQL, the architect can take advantage of the built-in high
availability and failover capabilities of the service, which will help ensure that the database is always available and able to handle increased traffic.
Therefore, the correct answer is Option B.
upvoted 3 times
6 months ago
Selected Answer: B
B is right; all the explanations below are correct.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B is right answer
upvoted 1 times
7 months, 1 week ago
B for me
upvoted 1 times
Topic 1
Question #139
A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually reviews and copies the files from this initial S3
bucket to an analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in
larger sizes to the initial S3 bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket. The reporting team also
wants to use AWS Lambda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files
to a pipeline in Amazon SageMaker Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
A. Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notification for the analysis S3 bucket. Configure
Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
B. Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 bucket to send event notifications to
Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda
and SageMaker Pipelines as targets for the rule.
C. Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and
SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
D. Configure S3 replication between the S3 buckets. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge
(Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker
Pipelines as targets for the rule.
Correct Answer:
A
Highly Voted
8 months ago
Selected Answer: D
I go for D here.
A and B say you are copying the file to another bucket using Lambda; C and D just use S3 replication to copy the files. They do exactly the same thing, while C and D do not require setting up Lambda, which should be more efficient.
The question says the team is manually copying the files; automatically replicating the files should be the most efficient method vs. manually copying or copying with Lambda.
upvoted 17 times
1 week, 2 days ago
Yes, D, because of least operational overhead, and also an S3 event notification can only send to SNS, SQS, and Lambda, not to SageMaker. EventBridge can send to SageMaker.
upvoted 2 times
Highly Voted
8 months, 1 week ago
Selected Answer: B
C and D aren't the answers, as replicating the S3 bucket isn't efficient: other teams are starting to use it to store larger docs not related to the reporting, making replication not useful.
As Amazon SageMaker Pipelines, ..., is now supported as a target for routing events in Amazon EventBridge, the answer is B.
https://aws.amazon.com/about-aws/whats-new/2021/04/new-options-trigger-amazon-sagemaker-pipeline-executions/
upvoted 15 times
1 week, 2 days ago
But B is not the least operational overhead; D is.
upvoted 1 times
5 months, 2 weeks ago
Nowhere in the question did they mention that other files were unrelated to reporting. "The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket": where did it say they were unrelated files, except by conjecture?
upvoted 3 times
2 months, 3 weeks ago
You misinterpret it: the reporting team is overloaded because more teams request their services, uploading more data to the bucket. That's the reason the reporting team needs to automate the process. So ALL the bucket objects need to be copied to the other bucket, and replication is better and cheaper than using Lambda. So the answer is D.
upvoted 2 times
6 months, 1 week ago
I think you are misinterpreting the question. I think you need to use all the files, including the ones provided by other teams; otherwise, how can you tell which files to copy? I think the point of this statement is to show that more files are in use and being copied at different times, rather than suggesting you need to differentiate between the two sources of files.
upvoted 4 times
Most Recent
5 days, 15 hours ago
Selected Answer: D
Option D is correct because it combines S3 replication, event notifications, and Amazon EventBridge to automate the copying of files from the
initial S3 bucket to the analysis S3 bucket. It also allows for the execution of Lambda functions and integration with SageMaker Pipelines.
Option A is incorrect because it suggests manually copying the files using a Lambda function and event notifications, but it does not utilize S3
replication or EventBridge for automation.
Option B is incorrect because it suggests using S3 event notifications directly with EventBridge, but it does not involve S3 replication or utilize
Lambda for copying the files.
Option C is incorrect because it only involves S3 replication and event notifications without utilizing EventBridge or Lambda functions for further
processing.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-notification-destinations
S3 can NOT send event notification to SageMaker. This rules out C. you have to send to • Amazon EventBridge 1st then to SageMaker
upvoted 4 times
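The EventBridge leg of answer D can be sketched as a rule pattern and target list. This is a hedged illustration: the bucket name, account ID, and ARNs are hypothetical, and the dicts are shaped like the inputs to EventBridge's `put_rule` / `put_targets` calls rather than live API requests. S3 publishes "Object Created" events to EventBridge once the bucket's EventBridge configuration is enabled; one rule can then fan out to both Lambda and a SageMaker pipeline.

```python
import json

# Hedged sketch of the EventBridge pieces in answer D.
# Bucket name, account ID, and ARNs are hypothetical.

# Rule pattern: match "Object Created" events from the analysis bucket.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["analysis-bucket"]}},  # hypothetical
}

# One rule, two targets: the pattern-matching Lambda function and the
# SageMaker pipeline (the pipeline target needs a role EventBridge assumes).
targets = [
    {"Id": "pattern-match",
     "Arn": "arn:aws:lambda:us-east-1:123456789012:function:match"},
    {"Id": "sm-pipeline",
     "Arn": "arn:aws:sagemaker:us-east-1:123456789012:pipeline/reporting",
     "RoleArn": "arn:aws:iam::123456789012:role/events-invoke"},
]

print(json.dumps(event_pattern))
```

The fan-out is the point: a plain S3 event notification supports only SNS, SQS, and Lambda as destinations, while an EventBridge rule can target the SageMaker pipeline directly.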
2 months, 2 weeks ago
Selected Answer: D
Why I believe it is not C: the key here is the s3:ObjectCreated:Put event. Replication will not fire the s3:ObjectCreated:Put event. See the link here:
https://aws.amazon.com/blogs/aws/s3-event-notification/
upvoted 2 times
3 months ago
Selected Answer: D
D takes care of the automated moving, and Lambda for pattern matching is covered efficiently in D.
upvoted 1 times
3 months, 1 week ago
only one destination type can be specified for each event notification in S3 event notifications
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: A
Answer is A
The statement says move the file. Replication won't move the file; it will just create a copy, so obviously C and D are out. When you have an event notification and Lambda, why do we need EventBridge as one more service? So the answer is A.
upvoted 1 times
1 week, 4 days ago
I searched S3 documentation and couldn't find where s3 event notification can trigger sagemaker pipelines. It can SNS,SQS and lambda. I am
not sure A is the right choice.
upvoted 1 times
2 months, 3 weeks ago
A and B say: create a Lambda function to COPY also. Then, following your idea, A and B are out too... ;)
Obviously the "move" argument isn't accurate in this question.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: B
Using lambda is one of the requirements. Sns, sqs, lambda, and event bridge are the only s3 notification destinations
https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html.
upvoted 1 times
5 months, 1 week ago
both A and D options can meet the requirements with the least operational overhead as they both use automatic event-driven mechanisms (S3
event notifications and EventBridge rules) to trigger the Lambda function and copy the files to the analysis S3 bucket. The Lambda function can
then run the pattern-matching code, and the files can be sent to the SageMaker pipeline.
Option A, directly copying the files to the analysis S3 bucket using a Lambda function, is more straightforward; option D, using S3 replication and EventBridge rules, is more flexible and can be more powerful, as it allows you to use more complex event-driven flows.
upvoted 2 times
5 months, 2 weeks ago
Ans : D
S3 event notifications can only be sent to SNS, SQS, and Lambda, BUT not SageMaker.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
upvoted 8 times
5 months, 2 weeks ago
Selected Answer: D
A and B are ruled out as they require an extra Lambda job to do the copy, while S3 replication will take care of it with little to no overhead.
C is incorrect because S3 notifications are not supported for SageMaker Pipelines
(https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-notification-destinations)
upvoted 5 times
5 months, 3 weeks ago
Selected Answer: C
Since we are already working with S3 buckets, configuring an S3 event notification (with event type s3:ObjectCreated:Put) is much easier than doing the same through EventBridge (which is an additional service in this case). Less operational overhead.
upvoted 4 times
5 months, 3 weeks ago
Selected Answer: D
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/NotificationHowTo.html
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: D
I would recommend option D as it is the most efficient way to meet the requirements with the least operational overhead.
Option D involves configuring S3 replication between the two S3 buckets, which will automatically copy the files from the initial S3 bucket to the
analysis S3 bucket as they are added. This eliminates the need to manually copy the files every day and will ensure that the analysis S3 bucket
always has the most recent data.
upvoted 2 times
5 months, 4 weeks ago
In addition, configuring the analysis S3 bucket to send event notifications to Amazon EventBridge (CloudWatch Events) and creating an
ObjectCreated rule allows you to trigger Lambda functions and SageMaker Pipelines when new objects are created in the analysis S3 bucket.
This allows you to perform pattern-matching and data processing on the copied data automatically as it is added to the analysis S3 bucket.
Option A and option C involve manually copying the files to the analysis S3 bucket, which is not an efficient solution given the increased volume
of data that the reporting team is expecting. Option B does not involve S3 replication, so it does not address the requirement to automatically
copy the data to the analysis S3 bucket.
upvoted 1 times
6 months ago
Selected Answer: D
Options A and B are incorrect because it involves creating a Lambda function to copy the files to the analysis S3 bucket, which is unnecessary. The
requirement is to move the files automatically to the analysis S3 bucket as soon as they are added to the initial S3 bucket. This can be achieved
more efficiently using S3 replication, which is not mentioned in Options A and B.
Option C is incorrect because it involves configuring S3 replication between the S3 buckets, which is correct. However, it does not involve
configuring the analysis S3 bucket to send event notifications to Amazon EventBridge (CloudWatch Events). This is necessary to trigger the
subsequent actions (i.e., running pattern-matching code using Lambda functions and sending data files to a pipeline in SageMaker Pipelines).
Therefore, the correct answer is Option D.
upvoted 5 times
6 months ago
Selected Answer: D
Going with D
upvoted 1 times
Topic 1
Question #140
A solutions architect needs to help a company optimize the cost of running an application on AWS. The application will use Amazon EC2
instances, AWS Fargate, and AWS Lambda for compute within the architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and unpredictable. Workloads that run on EC2
instances can be interrupted at any time. The application front end will run on Fargate, and Lambda will serve the API layer. The front-end
utilization and API layer utilization will be predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for hosting this application? (Choose two.)
A. Use Spot Instances for the data ingestion layer
B. Use On-Demand Instances for the data ingestion layer
C. Purchase a 1-year Compute Savings Plan for the front end and API layer.
D. Purchase 1-year All Upfront Reserved instances for the data ingestion layer.
E. Purchase a 1-year EC2 instance Savings Plan for the front end and API layer.
Correct Answer:
AC
Highly Voted
8 months ago
Selected Answer: AC
An EC2 Instance Savings Plan saves up to 72%, while Compute Savings Plans save up to 66%. But according to the AWS documentation, "Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66%. These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy, and also apply to Fargate and Lambda usage." EC2 Instance Savings Plans are not applied to Fargate or Lambda.
upvoted 10 times
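The reason the Spot + Compute Savings Plan combination wins can be shown with back-of-the-envelope arithmetic. All hourly rates below are made up for illustration; real prices vary by instance type and Region. The key structural fact, per AWS, is that an EC2 Instance Savings Plan does not apply to Fargate or Lambda, so under option E the front end and API layer stay at full price.

```python
# Illustrative arithmetic only: the rates below are hypothetical, chosen
# to show why answers A + C beat B + E. Real prices differ.

on_demand = 0.10              # hypothetical $/hr for the ingestion instances
spot = 0.03                   # Spot is typically a deep discount (interruptible)
compute_sp_discount = 0.66    # Compute SP: up to 66%, covers Fargate + Lambda
fargate_lambda_hourly = 0.50  # hypothetical steady front-end/API spend

hours = 24 * 365

# A + C: Spot for ingestion, Compute SP discounting Fargate/Lambda.
cost_a_c = spot * hours + fargate_lambda_hourly * (1 - compute_sp_discount) * hours

# B + E: On-Demand ingestion; the EC2 Instance SP does not apply to
# Fargate or Lambda, so the front end and API layer stay at full price.
cost_b_e = on_demand * hours + fargate_lambda_hourly * hours

print(round(cost_a_c, 2), round(cost_b_e, 2))
```

Even though the EC2 Instance Savings Plan's headline discount (up to 72%) is larger, it buys nothing here because none of the front-end or API spend is EC2.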
Most Recent
5 days, 15 hours ago
Selected Answer: AC
Using Spot Instances for the data ingestion layer will provide the most cost-effective option for sporadic and unpredictable workloads, as Spot
Instances offer significant cost savings compared to On-Demand Instances (Option A).
Purchasing a 1-year Compute Savings Plan for the front end and API layer will provide cost savings for predictable utilization over the course of a
year (Option C).
Option B is less cost-effective as it suggests using On-Demand Instances for the data ingestion layer, which does not take advantage of cost-saving
opportunities.
Option D suggests purchasing 1-year All Upfront Reserved instances for the data ingestion layer, which may not be optimal for sporadic and
unpredictable workloads.
Option E suggests purchasing a 1-year EC2 instance Savings Plan for the front end and API layer, but Compute Savings Plans are typically more
suitable for predictable workloads.
upvoted 2 times
4 weeks ago
Spot Instances for data ingestion because the task can be terminated at any time and tolerates disruption. A Compute Savings Plan also covers Fargate and Lambda, which an EC2 Instance Savings Plan does not.
upvoted 1 times
1 month ago
EC2 instance Savings Plans are not applied to Fargate or Lambda
upvoted 1 times
3 months, 1 week ago
Why not B?
upvoted 1 times
2 months, 3 weeks ago
Because On-Demand is more expensive than Spot, and additionally the workload has no problem with being interrupted at any time.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: AC
Compute Savings Plans can be used for EC2 instances and Fargate. Whereas EC2 Savings Plans support EC2 only.
upvoted 4 times
6 months ago
Selected Answer: AC
To optimize the cost of running this application on AWS, you should consider the following options:
A. Use Spot Instances for the data ingestion layer
C. Purchase a 1-year Compute Savings Plan for the front-end and API layer
Therefore, the most cost-effective solution for hosting this application would be to use Spot Instances for the data ingestion layer and to purchase
either a 1-year Compute Savings Plan or a 1-year EC2 instance Savings Plan for the front-end and API layer.
upvoted 1 times
6 months ago
Selected Answer: AC
Too obvious answer.
upvoted 1 times
6 months ago
Selected Answer: AC
AC
can be interrupted at any time => spot
upvoted 2 times
6 months ago
A, E:
Savings Plan — EC2
An EC2 Instance Savings Plan offers almost the same savings as RIs and adds automation around how the savings are applied. One way to understand it is that EC2 Instance Savings Plans are Standard Reserved Instances with automatic switching between instance types within the same instance family; they do not apply to ECS Fargate or Lambda.
Savings Plan — Compute
A Compute Savings Plan offers slightly lower savings but adds flexibility around instance types and Regions, so you don't have to monitor new instance types that are being launched. It also applies to Lambda and ECS Fargate workloads. One way to understand it is that Compute Savings Plans are Convertible Reserved Instances with automatic switching depending on the instance types being used.
upvoted 1 times
6 months, 1 week ago
Selected Answer: AC
A and C
upvoted 1 times
7 months, 2 weeks ago
its A and C . https://www.densify.com/finops/aws-savings-plan
upvoted 1 times
8 months ago
Selected Answer: AC
The API layer is not EC2; you need to use a Compute Savings Plan.
upvoted 4 times
8 months, 1 week ago
E makes more sense than C. See https://aws.amazon.com/savingsplans/faq/, EC2 instance Savings Plan (up to 72% saving) costs less than Compute
Savings Plan (up to 66% saving)
upvoted 4 times
1 month, 3 weeks ago
I Agree
upvoted 1 times
8 months ago
Isn't the EC2 Instance Savings Plan not applicable to Fargate and Lambda?
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 6 times
Topic 1
Question #141
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each
user a personalized view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an
Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the
world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?
A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB
as an origin.
B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the
closest Region.
C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content
directly from the ALB.
D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in
the closest Region.
Correct Answer:
B
Highly Voted
8 months, 2 weeks ago
Selected Answer: A
Answer is A.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content
https://www.examtopics.com/discussions/amazon/view/81081-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 19 times
Highly Voted
8 months ago
Selected Answer: B
Answer should be B,
CloudFront reduces latency if its only static content, which is not the case here.
For Dynamic content, CF cant cache the content so it sends the traffic through the AWS Network which does reduces latency, but it still has to
travel through another region.
For the case with 2 region and Route 53 latency routing, Route 53 detects the nearest resouce (with lowest latency) and routes the traffic there.
Because the traffic does not have to travel to resources far away, it should have the least latency in this case here.
upvoted 8 times
5 months, 3 weeks ago
CloudFront does not cache dynamic content, but latency can still be low for dynamic content because the traffic is on the AWS global network, which is faster than the internet.
upvoted 3 times
5 months, 1 week ago
Amazon CloudFront speeds up distribution of your static and dynamic web content, such as .html, .css, .php, image, and media files. When
users request your content, CloudFront delivers it through a worldwide network of edge locations that provide low latency and high
performance.
upvoted 3 times
7 months, 3 weeks ago
Cf works for both static and dynamic content
upvoted 8 times
7 months ago
Can you pls. provide a ref. link from where this info. got extracted?
upvoted 1 times
Most Recent
5 days, 6 hours ago
Selected Answer: A
CloudFront is a CDN that is well adapted for dynamic content.
News, sports, local, weather
Web applications of this type often have a geographic focus with customized content for end users. Content can be cached at edge locations for
varying lengths of time depending on type of content. For example, hourly updates can be cached for up to an hour, while urgent alerts may only
be cached for a few seconds so end users always have the most up to date information available to them. A content delivery network is a great
platform for serving common types of experiences for news and weather such as articles, dynamic map tiles, overlays, forecasts, breaking news or
alert tickers, and video.
https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 1 times
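Answer A can be sketched as a CloudFront distribution configuration. This is a hedged illustration: the ALB domain name and path pattern are hypothetical, and the dict is a trimmed-down shape of the real `DistributionConfig` structure, not a complete, deployable one. Static content is cached at the edge; a separate behavior for dynamic API paths keeps TTLs at zero, so those requests still reach the single-Region ALB but ride the AWS backbone from the nearest edge.

```python
# Hedged sketch of answer A: one distribution, the ALB as origin,
# cached static content plus a pass-through behavior for dynamic paths.
# Domain name and path pattern are hypothetical; the dict is a simplified
# subset of CloudFront's DistributionConfig.

distribution_config = {
    "Origins": [{
        "Id": "portal-alb",
        "DomainName": "portal-alb-123.us-east-1.elb.amazonaws.com",  # hypothetical
        "CustomOriginConfig": {"OriginProtocolPolicy": "https-only"},
    }],
    "DefaultCacheBehavior": {        # static content: cache at the edge
        "TargetOriginId": "portal-alb",
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 3600,
    },
    "CacheBehaviors": [{             # dynamic content: no caching, but the
        "PathPattern": "/api/*",     # request still enters at the nearest edge
        "TargetOriginId": "portal-alb",
        "ViewerProtocolPolicy": "https-only",
        "MinTTL": 0, "DefaultTTL": 0, "MaxTTL": 0,
    }],
}

print(len(distribution_config["CacheBehaviors"]))
```

This is why A needs only one Region: CloudFront's edge network, not a second application stack, is what shortens the path for users worldwide.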
3 weeks, 6 days ago
I would definitely go for C.
If you are serving dynamic content such as web applications or APIs directly from an Amazon Elastic Load Balancer (ELB) or Amazon EC2 instances
to end users on the internet, you can improve the performance, availability, and security of your content by using Amazon CloudFront as your
content delivery network.
https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: A
A is correct. CloudFront distributes the content globally. Why not deploy the application in 4 or 5 Regions instead of 2? It's an arbitrary choice; that's one of the reasons why B and D are not a solid solution.
upvoted 1 times
1 month ago
Selected Answer: A
I go for option A. CloudFront uses edge locations to speed up content delivery, both static and dynamic, hence A is the right answer.
upvoted 1 times
1 month, 1 week ago
Selected Answer: B
I would say B.
Two Regions are always better if you aim for better distribution of the traffic. This will split the number of requests sent to the single EC2 instance in half, indirectly improving latency.
It's true that CloudFront improves latency, but it's hard to say if this will be true for ALL users. Having a second Region will definitely improve the performance for the users with less latency at the moment.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: A
A is correct. Cloudfront can serve both static and dynamic content fast.
https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 2 times
1 month, 4 weeks ago
Selected Answer: B
The lowest latency (option B) is not always equal to the closest resource (option D), and the requirement asks for the lowest latency.
upvoted 1 times
2 months, 1 week ago
A.
If you are serving dynamic content such as web applications or APIs directly from an Amazon Elastic Load Balancer (ELB) or Amazon EC2 instances
to end users on the internet, you can improve the performance, availability, and security of your content by using Amazon CloudFront as your
content delivery network.
https://aws.amazon.com/cloudfront/dynamic-content/
upvoted 2 times
2 months, 1 week ago
CloudFront caches the static content. It also accepts requests for dynamic content and forwards them to the ALB via the AWS backbone (very fast).
upvoted 1 times
2 months, 2 weeks ago
ANSWER - B: To achieve the least amount of latency for all users, the best approach would be to deploy the application stack in two AWS Regions
and use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest region. This approach will ensure that users are
directed to the lowest latency endpoint available based on their location, which can significantly reduce latency and improve the performance of
the application.
While using Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin can also improve the performance of the
application, it may not be the best approach to achieve the least amount of latency for all users. This is because CloudFront may not always direct
users to the closest endpoint based on their location, which can result in higher latency for some users.
Therefore, using an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest region is the best approach to achieve
the least amount of latency for all users
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
Cloudfront is global and serves all regions equally. Route 53 latency option provides the lowest latency option of the two regions, but this could
still be terrible latency for users outside of those regions.
upvoted 2 times
3 months, 1 week ago
Having the stack in two Regions is always better than one Region when the portal has to be used globally. This crosses out Options A and C.
The requirement is to have the LEAST amount of latency, so instead of choosing a Route 53 geolocation routing policy (Option D), we should go for latency-based routing, which is Option B.
upvoted 1 times
3 months, 2 weeks ago
Something is wrong with the question, or the answers.
The best way to do it is to deploy the website in one Region, use CloudFront to reduce latency, and use a geolocation Route 53 routing policy, as the application provides local alerts and weather alerts.
Without geolocation, the application will provide local alerts for London to people living in Australia.
Answer D is the closest; however, it's wrong.
upvoted 3 times
3 months, 2 weeks ago
Selected Answer: A
Use Amazon CloudFront as a content delivery network (CDN) to distribute static and dynamic content to edge locations around the world
upvoted 1 times
3 months, 4 weeks ago
Selected Answer: A
A for me.
upvoted 1 times
Topic 1
Question #142
A gaming company is designing a highly available architecture. The application runs on a modified Linux kernel and supports only UDP-based
traffic. The company needs the front-end tier to provide the best possible user experience. That tier must have low latency, route traffic to the
nearest edge location, and provide static IP addresses for entry into the application endpoints.
What should a solutions architect do to meet these requirements?
A. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda for the application in AWS Application
Auto Scaling.
B. Configure Amazon CloudFront to forward requests to a Network Load Balancer. Use AWS Lambda for the application in an AWS Application
Auto Scaling group.
C. Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use Amazon EC2 instances for the application in an
EC2 Auto Scaling group.
D. Configure Amazon API Gateway to forward requests to an Application Load Balancer. Use Amazon EC2 instances for the application in an
EC2 Auto Scaling group.
Correct Answer:
C
Highly Voted
8 months ago
Correct Answer: C
AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world.
CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and
dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge
to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT),
or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services
integrate with AWS Shield for DDoS protection.
upvoted 43 times
6 months, 3 weeks ago
On top of this, Lambda would not be able to run an application that runs on a modified Linux kernel. The answer is C.
upvoted 3 times
5 months, 3 weeks ago
Explained very well. ty
upvoted 2 times
7 months, 1 week ago
Thank you, your explanation helped me to better understand even the answer of question 29
upvoted 3 times
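Answer C can be sketched as Global Accelerator request parameters. This is a hedged illustration: the accelerator name, game port, and NLB ARN are hypothetical, and the dicts are shaped like the inputs to `create_accelerator` / `create_listener` / `create_endpoint_group` rather than live API calls. Global Accelerator hands out static anycast IPs, terminates traffic at the nearest edge, and can forward UDP to a Network Load Balancer endpoint, which ALBs cannot accept.

```python
# Hedged sketch of answer C as request parameters (not live API calls).
# Accelerator name, port, and NLB ARN are hypothetical.

# The accelerator itself provides the static anycast IP addresses that
# the question requires as fixed entry points.
accelerator = {
    "Name": "game-accelerator",   # hypothetical name
    "IpAddressType": "IPV4",
    "Enabled": True,
}

# UDP listener: this is the step an ALB could not serve, since ALBs
# handle only HTTP/HTTPS.
listener = {
    "Protocol": "UDP",
    "PortRanges": [{"FromPort": 27015, "ToPort": 27015}],  # hypothetical game port
}

# Endpoint group pointing at the Network Load Balancer in front of the
# EC2 Auto Scaling group.
endpoint_group = {
    "EndpointConfigurations": [{
        "EndpointId": ("arn:aws:elasticloadbalancing:us-east-1:"
                       "123456789012:loadbalancer/net/game-nlb/abc"),
        "Weight": 128,
    }],
}

print(listener["Protocol"])
```

Each requirement in the stem maps to one piece: static IPs from the accelerator, nearest-edge entry from the anycast network, and UDP support from the listener plus the NLB.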
Highly Voted
6 months ago
Selected Answer: C
The correct answer is Option C. To meet the requirements;
* AWS Global Accelerator is a service that routes traffic to the nearest edge location, providing low latency and static IP addresses for the front-end
tier. It supports UDP-based traffic, which is required by the application.
* A Network Load Balancer is a layer 4 load balancer that can handle UDP traffic and provide static IP addresses for the application endpoints.
* An EC2 Auto Scaling group ensures that the required number of Amazon EC2 instances is available to meet the demand of the application. This
will help the front-end tier to provide the best possible user experience.
Option A is not a valid solution because Amazon Route 53 does not support UDP traffic.
Option B is not a valid solution because Amazon CloudFront does not support UDP traffic.
Option D is not a valid solution because Amazon API Gateway does not support UDP traffic.
upvoted 5 times
6 months ago
My mistake; the correction on Option A is that Application Load Balancers do not support UDP traffic. They are designed to load balance HTTP and HTTPS traffic, and they do not support other protocols such as UDP.
upvoted 1 times
Most Recent
5 days, 14 hours ago
Selected Answer: C
AWS Global Accelerator is designed to improve the availability and performance of applications by routing traffic through the AWS global network
to the nearest edge locations, reducing latency. By configuring AWS Global Accelerator to forward requests to a Network Load Balancer, UDP-
based traffic can be efficiently distributed across multiple EC2 instances in an Auto Scaling group. Using Amazon EC2 instances for the application
allows for customization of the Linux kernel and support for UDP-based traffic. This solution provides static IP addresses for entry into the
application endpoints, ensuring consistent access for users.
Option A suggests using AWS Lambda for the application, but Lambda is not suitable for long-running UDP-based applications and may not
provide the required low latency.
Option B suggests using CloudFront, which is primarily designed for HTTP/HTTPS traffic and does not have native support for UDP-based traffic.
Option D suggests using API Gateway, which is primarily used for RESTful APIs and does not support UDP-based traffic.
upvoted 1 times
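As a rough illustration of the setup the comment above describes, the following sketch builds the request parameters a Global Accelerator UDP listener and its NLB endpoint group would need. All ARNs, names, and ports here are hypothetical placeholders, not values from the question; in practice these dicts would be passed to boto3's `globalaccelerator` client (`create_listener`, `create_endpoint_group`).

```python
# Sketch of the Global Accelerator configuration described above.
# ARNs, names, and ports are hypothetical placeholders.

ACCELERATOR_ARN = "arn:aws:globalaccelerator::123456789012:accelerator/example"
NLB_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/net/game-nlb/abc123")

create_listener_params = {
    "AcceleratorArn": ACCELERATOR_ARN,
    "Protocol": "UDP",  # Global Accelerator supports TCP and UDP
    "PortRanges": [{"FromPort": 4000, "ToPort": 4000}],
}

create_endpoint_group_params = {
    # ListenerArn would come from the create_listener response
    "EndpointGroupRegion": "us-east-1",
    "EndpointConfigurations": [
        # The NLB fronting the EC2 Auto Scaling group is the endpoint
        {"EndpointId": NLB_ARN, "Weight": 128},
    ],
}

def listener_supports_udp(params: dict) -> bool:
    """Return True if the listener is configured for UDP traffic."""
    return params.get("Protocol") == "UDP"
```

The key point the sketch captures is that the listener protocol is UDP, something neither CloudFront nor API Gateway can front.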
4 weeks ago
AWS Global Accelerator provides static IP addresses.
upvoted 1 times
1 month ago
Selected Answer: C
My choice is option C for the following reasons: AWS Global Accelerator routes traffic to the nearest edge locations, it supports UDP-based traffic, and
it provides static IP addresses as well; hence C is the right answer.
upvoted 1 times
2 months, 4 weeks ago
Answer : C
CloudFront : Doesn't support static IP addresses
ALB : Doesn't support UDP
upvoted 1 times
3 months, 3 weeks ago
C - https://aws.amazon.com/global-accelerator/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
To meet the requirements of providing low latency, routing traffic to the nearest edge location, and providing static IP addresses for entry into the
application endpoints, the best solution would be to use AWS Global Accelerator. This service routes traffic to the nearest edge location and
provides static IP addresses for the application endpoints. The front-end tier should be configured with a Network Load Balancer, which can handle
UDP-based traffic and provide high availability. Option C, "Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use
Amazon EC2 instances for the application in an EC2 Auto Scaling group," is the correct answer.
upvoted 1 times
6 months ago
Selected Answer: C
C is obvious choice here.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
C as Global Accelerator is the best choice for UDP based traffic needing static IP address.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
C is correct.
upvoted 1 times
6 months, 2 weeks ago
CloudFront is designed to handle the HTTP protocol, whereas Global Accelerator is best used for both HTTP and non-HTTP protocols such as TCP
and UDP. Hence C is the answer!
upvoted 1 times
7 months ago
C is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: C
CloudFront supports both static and dynamic content, while Global Accelerator provides low latency over UDP.
upvoted 1 times
Topic 1
Question #143
A company wants to migrate its existing on-premises monolithic application to AWS. The company wants to keep as much of the front-end code
and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage
each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as
targets.
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the
target.
Correct Answer:
D
Highly Voted
8 months ago
I think the answer here is "D" because usually when you see terms like "monolithic" the answer will likely refer to microservices.
upvoted 21 times
Highly Voted
7 months, 3 weeks ago
Selected Answer: D
D is the organic pattern: lift and shift, decompose into containers, first making the most use of existing code, while new features can be added over time
with Lambda + API Gateway later.
A is the leapfrog pattern, requiring refactoring of all code up front.
upvoted 13 times
Most Recent
4 days, 19 hours ago
Selected Answer: D
ECS provides a highly scalable and managed environment for running containerized applications, reducing operational overhead. By setting up an
ALB with ECS as the target, traffic can be distributed across multiple instances of the application for scalability and availability. This solution enables
different teams to manage each application independently, promoting team autonomy and efficient development.
A is more suitable for event-driven and serverless workloads. It may not be the ideal choice for migrating a monolithic application and maintaining
the existing codebase.
B integrates with Lambda and API Gateway, it may not provide the required flexibility for breaking the application into smaller applications and
managing them independently.
C would involve managing the infrastructure and scaling manually. It may result in higher operational overhead compared to using a container
service like ECS.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: D
I was confused about this, but actually an Amazon ECS service can be configured to use Elastic Load Balancing to distribute traffic evenly across the
tasks in your service.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-application-load-balancer.html
upvoted 1 times
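To make the linked doc concrete, here is a minimal sketch of the `create_service` parameters that register an ECS service with an ALB target group. The cluster, service, task, and ARN values are made up for illustration; in practice the dict is passed to boto3's `ecs` client `create_service()`.

```python
# Hypothetical sketch of an ECS service fronted by an ALB target group.
# Names and ARNs are placeholders.

TARGET_GROUP_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "targetgroup/web/abc123")

create_service_params = {
    "cluster": "app-cluster",
    "serviceName": "web-service",      # one service per decomposed team app
    "taskDefinition": "web-task:1",
    "desiredCount": 3,
    "loadBalancers": [
        {
            # The ALB routes traffic to the tasks registered in this group
            "targetGroupArn": TARGET_GROUP_ARN,
            "containerName": "web",
            "containerPort": 80,
        }
    ],
}

def service_is_load_balanced(params: dict) -> bool:
    """Check that the service registers its tasks with a target group."""
    return any("targetGroupArn" in lb for lb in params.get("loadBalancers", []))
```

Each team's application would get its own service and target group, which is what lets the teams manage their pieces independently behind one ALB.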
1 month, 2 weeks ago
Selected Answer: D
monolithic = microservices = ECS
upvoted 3 times
2 months, 1 week ago
I thought an ALB is about distributing load. How would we use it to connect decoupled applications that need to call each other? I am kind of
confused why most people are going with D.
I think I will go with A.
upvoted 2 times
3 months, 3 weeks ago
I think the answer is A.
B is wrong because the requirement is not for the backend. C and D are not suitable because the ALB is not best suited for middle-tier applications.
upvoted 2 times
5 months, 1 week ago
I will go with A because of lower operational overhead and high availability (Lambda has both).
With ECS there is operational overhead, and it can only scale up to the EC2 capacity assigned under it.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
To meet the requirement of breaking the application into smaller applications that can be managed by different teams, while minimizing
operational overhead and providing high scalability, the best solution would be to host the applications on Amazon Elastic Container Service
(Amazon ECS). Amazon ECS is a fully managed container orchestration service that makes it easy to run, scale, and maintain containerized
applications. Additionally, setting up an Application Load Balancer with Amazon ECS as the target will allow the company to easily scale the
application as needed. Option D, "Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer
with Amazon ECS as the target," is the correct answer.
upvoted 1 times
5 months, 4 weeks ago
Selected Answer: D
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.
Hosting the application on Amazon ECS would allow the company to break the monolithic application into smaller, more manageable applications
that can be managed by different teams. Amazon ECS is a fully managed container orchestration service that makes it easy to deploy, run, and
scale containerized applications. By setting up an Application Load Balancer with Amazon ECS as the target, the company can ensure that the
solution is highly scalable and minimizes operational overhead.
upvoted 1 times
6 months ago
Selected Answer: D
The correct answer is Option D. To meet the requirements, the company should host the application on Amazon Elastic Container Service (Amazon
ECS) and set up an Application Load Balancer with Amazon ECS as the target.
Option A is not a valid solution because AWS Lambda is not suitable for hosting long-running applications.
Option B is not a valid solution because AWS Amplify is a framework for building, deploying, and managing web applications, not a hosting
solution.
Option C is not a valid solution because Amazon EC2 instances are not fully managed container orchestration services. The company will need to
manage the EC2 instances, which will increase operational overhead.
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
It could be C or D depending on how easy it would be to containerize the application. If the application needs a persistent local data store, then C would be
a better choice.
Also, from the use-case description it is not clear whether the application is HTTP-based, though all the options use an ALB, so we can
safely assume that it is an HTTP-based application.
upvoted 2 times
6 months, 1 week ago
After reading this question again, A has the minimum operational overhead.
D has higher operational overhead, since it involves scaling EC2 servers up and down to run the ECS containers.
upvoted 1 times
7 months ago
D is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: D
I think D is the right choice, as they want the application to be managed by different teams, which could be enabled by breaking it into different
containers.
upvoted 1 times
8 months ago
Selected Answer: D
imho, it's D because "break the application into smaller applications" doesn't mean it has to be serverless. Rather, it can be divided into smaller
applications running on containers.
upvoted 2 times
8 months ago
Selected Answer: A
I think A is the answer here, breaking into smaller pieces so lambda makes the most sense.
I don't see any restrictions in the question that forbids the usage of lambda
upvoted 2 times
7 months, 1 week ago
The reason for not choosing A: "The company wants to keep as much of the front-end code and the backend code as possible"
upvoted 4 times
Topic 1
Question #144
A company recently started using Amazon Aurora as the data store for its global ecommerce application. When large reports are run, developers
report that the ecommerce application is performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the
ReadIOPS and CPUUtilization metrics are spiking when monthly reports run.
What is the MOST cost-effective solution?
A. Migrate the monthly reporting to Amazon Redshift.
B. Migrate the monthly reporting to an Aurora Replica.
C. Migrate the Aurora database to a larger instance class.
D. Increase the Provisioned IOPS on the Aurora instance.
Correct Answer:
B
4 days, 14 hours ago
Selected Answer: B
B is correct because migrating the monthly reporting to an Aurora Replica can offload the reporting workload from the primary Aurora instance,
reducing the impact on its performance during large reports. Using an Aurora Replica provides scalability and allows the replica to handle the read-
intensive reporting queries, improving the overall performance of the ecommerce application.
A is wrong because migrating to Amazon Redshift introduces additional costs and complexity, and it may not be necessary to switch to a separate
data warehousing service for this specific issue.
C is wrong because simply increasing the instance class of the Aurora database may not be the most cost-effective solution if the performance
issue can be resolved by offloading the reporting workload to an Aurora Replica.
D is wrong because increasing the Provisioned IOPS alone may not address the issue of spikes in CPUUtilization during large reports, as it primarily
focuses on storage performance rather than overall database performance.
upvoted 1 times
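One way to picture "offloading the reports to a replica" is routing read-only reporting queries to the Aurora cluster's reader endpoint while the application keeps the writer endpoint. This is only a sketch: the hostnames below are made up, and a real application would pass the chosen endpoint to its MySQL driver.

```python
# Minimal sketch of endpoint routing for an Aurora cluster.  Hostnames are
# hypothetical; Aurora exposes a writer endpoint plus a reader endpoint
# that load-balances across the replicas.

WRITER_ENDPOINT = "mydb.cluster-abc.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mydb.cluster-ro-abc.us-east-1.rds.amazonaws.com"

def endpoint_for(workload: str) -> str:
    """Send reporting (read-only) work to replicas, everything else to the writer."""
    if workload == "monthly_report":
        # Keeps the ReadIOPS/CPU spikes off the primary instance
        return READER_ENDPOINT
    return WRITER_ENDPOINT
```

The ecommerce traffic never competes with the report queries, which is the whole point of option B.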
1 month ago
By using an Aurora Replica for running large reports, the primary database will be relieved of the additional read load, improving performance for
the ecommerce application.
upvoted 1 times
1 month ago
Selected Answer: B
Option B is right answer.
upvoted 1 times
1 month, 2 weeks ago
Finally a question where there are no controversies
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: B
The most cost-effective solution for addressing high ReadIOPS and CPU utilization when running large reports would be to migrate the monthly
reporting to an Aurora Replica. An Aurora Replica is a read-only copy of an Aurora database that is updated in real-time with the primary database.
By using an Aurora Replica for running large reports, the primary database will be relieved of the additional read load, improving performance for
the ecommerce application. Option B, "Migrate the monthly reporting to an Aurora Replica," is the correct answer.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B: Migrating the monthly reporting to an Aurora Replica may be the most cost-effective solution because it involves creating a read-only
copy of the database that can be used specifically for running large reports without impacting the performance of the primary database. This
solution allows the company to scale the read capacity of the database without incurring additional hardware or I/O costs.
upvoted 3 times
6 months, 1 week ago
The incorrect solutions are:
Option A: Migrating the monthly reporting to Amazon Redshift may not be cost-effective because it involves creating a new data store and
potentially significant data migration and ETL costs.
Community vote distribution
B (100%)
Option C: Migrating the Aurora database to a larger instance class may not be cost-effective because it involves changing the underlying
hardware of the database and potentially incurring additional costs for the larger instance.
Option D: Increasing the Provisioned IOPS on the Aurora instance may not be cost-effective because it involves paying for additional I/O
capacity that may not be necessary for other workloads on the database.
upvoted 5 times
6 months, 1 week ago
Selected Answer: B
B is the best option
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: B
B is correct
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: B
The ReadIOPS issue points toward a read replica as the most cost-effective solution here.
upvoted 4 times
7 months, 4 weeks ago
Answer B
upvoted 2 times
Topic 1
Question #145
A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses
a MySQL database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The
application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the
application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?
A. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second
EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load to each EC2 instance.
B. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2
On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.
C. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop the EC2 instance and change the
instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization surpasses 75%.
D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application. Apply the AMI to a launch template.
Create an Auto Scaling group with the launch template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer
to the Auto Scaling group.
Correct Answer:
D
4 days, 14 hours ago
D is correct because migrating the database to Amazon Aurora provides better scalability and performance compared to Amazon RDS for MySQL.
Creating an AMI of the web application allows for easy replication of the application on multiple instances. Using a launch template and Auto
Scaling group with Spot Fleet provides cost optimization by leveraging spot instances. Adding an Application Load Balancer ensures the load is
distributed across the instances for seamless scaling.
A is incorrect because launching a fixed second instance behind an Application Load Balancer does not scale seamlessly with demand; an Auto
Scaling group is needed for that.
B is incorrect because weighted routing in Amazon Route 53 distributes traffic based on fixed weights, which may not dynamically adjust to the
changing load.
C is incorrect because using AWS Lambda to stop and change the instance type based on CPU utilization is not an efficient way to handle scaling
for a web application. Auto Scaling is a better approach for dynamic scaling.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: D
I was tempted to pick A but then I realized there are two key requirements:
- scale seamlessly
- cost-effectively
None of A-C give seamless scalability. A and B are about adding second instance (which I assume does not match to "scale seamlessly"). C is about
changing instance type.
Therefore D is only workable solution to the scalability requirement
upvoted 3 times
1 month, 3 weeks ago
Yup. Got me too. I picked A, saw D, and then reread the "scale seamlessly" part. D is correct.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
I wouldn't run my website on Spot Instances. Spot Instances might be terminated at any time, and since I need to run an analytics application, that's not
an option for me. And using Route 53 for load balancing of 2 instances is overkill. I go with A.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: D
The options that say "launch a second EC2 instance" make no sense... why 2? Why not 3, 4, or 5?
So options A and B are out.
C makes no sense either (Lambda doing the job of an Auto Scaling group? Absurd.)
It has to be D. A little strange, because Aurora is a very good solution but NOT CHEAP (remember: cost-effectively).
To be honest, the most cost-effective is B, heh heh
upvoted 2 times
3 months, 1 week ago
A Spot Fleet is a set of Spot Instances and optionally On-Demand Instances that is launched based on criteria that you specify. The Spot Fleet
selects the Spot capacity pools that meet your needs and launches Spot Instances to meet the target capacity for the fleet. By default, Spot Fleets
are set to maintain target capacity by launching replacement instances after Spot Instances in the fleet are terminated. You can submit a Spot Fleet
as a one-time request, which does not persist after the instances have been terminated. You can include On-Demand Instance requests in a Spot
Fleet request.
upvoted 1 times
4 months, 2 weeks ago
Ans: D
Both Amazon RDS for MySQL and Amazon Aurora MySQL are designed to provide customers with fully managed relational database services, but
Amazon Aurora MySQL is designed to provide better performance, scalability, and reliability, making it a better option for customers who need
high-performance database services.
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
Using an Auto Scaling group with a launch template and a Spot Fleet allows the company to scale the application seamlessly and cost-effectively,
by automatically adding or removing instances based on the demand, and using Spot instances which are spare compute capacity available in the
AWS region at a lower price than On-Demand instances. And also by migrating the database to Amazon Aurora MySQL DB instance, it provides
higher scalability, availability, and performance than traditional MySQL databases.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
The answer is D:
Migrate the database to Amazon Aurora MySQL - this will let the DB scale on its own; it'll scale automatically without needing adjustment.
Create an AMI of the web app and use a launch template - this makes the creation of any future instances of the app seamless. They can then be
added to the Auto Scaling group, which will save money as it scales up and down based on demand.
Use a Spot Fleet to launch instances - this solves the "MOST cost-effective" portion of the question, as Spot Instances come at a huge discount at
the cost of being terminated at any time Amazon deems fit. I think this is why there's a bit of disagreement here. While it's the most cost-
effective, it would be a terrible choice if Amazon were to terminate that Spot capacity during a busy period.
upvoted 1 times
5 months, 3 weeks ago
But I have a question:
for Spot Instances, is it possible that at some point there are no Spot resources available at all? Because it is not guaranteed, right?
upvoted 4 times
4 months, 2 weeks ago
A Spot Fleet, not a Spot Instance, is mentioned there. Spot Fleet = Spot Instances + On-Demand Instances. If Spot capacity is unavailable,
the fleet can fall back to On-Demand Instances.
upvoted 4 times
5 months, 4 weeks ago
Selected Answer: D
Option D is the most cost-effective solution that meets the requirements.
Migrating the database to Amazon Aurora MySQL will allow the database to scale automatically, so it can handle an increase in traffic without
manual intervention. Creating an AMI of the web application and using a launch template will allow the company to quickly and easily launch new
instances of the application, which can then be added to an Auto Scaling group. This will allow the application to automatically scale up and down
based on demand, ensuring that there are enough resources to handle busy times without incurring the cost of running idle resources.
Using a Spot Fleet to launch the instances will allow the company to take advantage of Amazon's spare capacity and get a discount on their EC2
instances. Attaching an Application Load Balancer to the Auto Scaling group will allow the load to be distributed across all of the available
instances, improving the performance and reliability of the application.
upvoted 3 times
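The pieces of option D described above (launch template from the app AMI, Auto Scaling group, Spot capacity, ALB attachment) can be sketched as one request parameter dict. Everything here is a placeholder under assumed names; in practice a dict of this shape goes to boto3's `autoscaling` client `create_auto_scaling_group()`, using a MixedInstancesPolicy as one way to run the group largely on Spot capacity.

```python
# Rough sketch of the Auto Scaling group in option D.  Template name,
# group name, and target group ARN are hypothetical placeholders.

TARGET_GROUP_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "targetgroup/web/abc123")

create_asg_params = {
    "AutoScalingGroupName": "php-web-asg",
    "MinSize": 2,
    "MaxSize": 10,
    "TargetGroupARNs": [TARGET_GROUP_ARN],  # the attached ALB target group
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "php-web-template",  # built from the app AMI
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,                 # keep one On-Demand instance
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything else on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
}

def mostly_spot(params: dict) -> bool:
    """True when capacity above the base runs entirely on Spot."""
    dist = params["MixedInstancesPolicy"]["InstancesDistribution"]
    return dist["OnDemandPercentageAboveBaseCapacity"] == 0
```

Keeping a small On-Demand base while scaling out on Spot is one way to soften the Spot-interruption concern raised in the comments.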
6 months ago
Selected Answer: D
Option D is the most cost-effective solution because;
* it uses an Auto Scaling group with a launch template and a Spot Fleet to automatically scale the number of EC2 instances based on the workload.
* using a Spot Fleet allows the company to take advantage of the lower prices of Spot Instances while still providing the required performance and
availability for the application.
* using an Aurora MySQL database instance allows the company to take advantage of the scalability and performance of Aurora.
upvoted 2 times
6 months ago
D ,as only this has auto scaling
upvoted 1 times
6 months ago
ANSWER IS D
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
D is the right option. A is possible but it will have high cost due to on-demand instances. It is not mentioned that 24x7 application availability is
high priority goal.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
correct is D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
"You can submit a Spot Fleet as a one-time request, which does not persist after the instances have been terminated. You can include On-Demand
Instance requests in a Spot Fleet request."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html
upvoted 3 times
6 months, 2 weeks ago
Selected Answer: D
D. other answers don't deal with scaling.
upvoted 1 times
Topic 1
Question #146
A company runs a stateless web application in production on a group of Amazon EC2 On-Demand Instances behind an Application Load Balancer.
The application experiences heavy usage during an 8-hour period each business day. Application usage is moderate and steady overnight.
Application usage is low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
Which solution will meet these requirements?
A. Use Spot Instances for the entire workload.
B. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional capacity that the application needs.
C. Use On-Demand Instances for the baseline level of usage. Use Spot Instances for any additional capacity that the application needs.
D. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional capacity that the application needs.
Correct Answer:
B
Highly Voted
7 months, 4 weeks ago
Selected Answer: B
The question mentions that the company currently uses On-Demand Instances, so I think Reserved plus Spot is cheaper.
upvoted 11 times
Highly Voted
6 months, 2 weeks ago
Answer is B: Reserved is cheaper than the On-Demand pricing the company has now, and it meets the availability (HA) requirement, unlike Spot
Instances, which can be disrupted at any time.
Pricing below (typical discount vs. On-Demand):
On-Demand: 0%. There's no commitment from you; you pay the most with this option.
Reserved: 40%-60%. 1-year or 3-year commitment from you; you save money from that commitment.
Spot: 50%-90%. Ridiculously inexpensive because there's no commitment from the AWS side.
upvoted 7 times
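The discount ranges quoted above can be turned into a quick back-of-the-envelope comparison of the baseline (24x7) fleet cost. The hourly rate, fleet size, and discount are illustrative numbers only, not real AWS prices.

```python
# Illustrative cost comparison for the always-on baseline capacity.
# The $0.10/hour rate, 4-instance fleet, and 40% discount are made up
# for illustration; real prices vary by instance type and Region.

ON_DEMAND_HOURLY = 0.10
HOURS_PER_MONTH = 730

def monthly_cost(discount: float, instances: int = 4) -> float:
    """Monthly cost of the baseline fleet at a given discount vs On-Demand."""
    return ON_DEMAND_HOURLY * (1 - discount) * HOURS_PER_MONTH * instances

on_demand_baseline = monthly_cost(0.0)   # option C/D style baseline
reserved_baseline = monthly_cost(0.40)   # option B, at the low end (40% off)

# Reserved pricing covers the steady overnight/weekend load more cheaply,
# which is why B beats C for the always-on portion of the workload.
savings = on_demand_baseline - reserved_baseline
```

Even at the low end of the quoted range, the Reserved baseline undercuts the On-Demand one; Spot then covers only the bursty daytime peak.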
Most Recent
4 days, 14 hours ago
Selected Answer: B
B is correct because it combines the use of Reserved Instances and Spot Instances to minimize EC2 costs while ensuring availability. Reserved
Instances provide cost savings for the baseline level of usage during the heavy usage period, while Spot Instances are utilized for any additional
capacity needed during peak times, taking advantage of their cost-effectiveness.
A is incorrect because relying solely on Spot Instances for the entire workload can result in potential interruptions and instability during peak usage
periods.
C is incorrect because using On-Demand Instances for the baseline level of usage does not provide the cost savings and long-term commitment
benefits that Reserved Instances offer.
D is incorrect because using Dedicated Instances for the baseline level of usage incurs additional costs without significant benefits for this scenario.
Dedicated Instances are typically used for compliance or regulatory requirements rather than cost optimization.
upvoted 1 times
2 weeks, 5 days ago
Selected Answer: B
A company runs a stateless web application in production. This means that the application can be stopped and restarted again.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
Answer is D, you cannot guarantee availability with spot instances
upvoted 2 times
1 week, 6 days ago
The application is stateless.
upvoted 1 times
1 month, 2 weeks ago
Answer is D, you cannot guarantee availability with spot instances
upvoted 1 times
2 months, 1 week ago
Selected Answer: D
To make the application scale seamlessly, Option D is more suitable because it involves using Auto Scaling with Spot Fleet. Auto Scaling allows you
to automatically adjust the number of EC2 instances in response to changes in demand, while Spot Fleet allows you to request and maintain a fleet
of Spot Instances at a lower cost compared to On-Demand Instances.
upvoted 2 times
2 months, 4 weeks ago
Strange: it wants a solution without affecting availability but has not given the right option... Spot Instances cannot guarantee availability, even at
night... or whatever...
upvoted 2 times
1 week, 6 days ago
The application is stateless.
upvoted 1 times
6 months ago
Selected Answer: B
Option B is the most cost-effective solution that meets the requirements.
* Using Reserved Instances for the baseline level of usage will provide a discount on the EC2 costs for steady overnight and weekend usage.
* Using Spot Instances for any additional capacity that the application needs during peak usage times will allow the company to take advantage of
spare capacity in the region at a lower cost than On-Demand Instances.
upvoted 4 times
6 months ago
Selected Answer: B
B is correct
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B is most cost effective without compromising the availability for baseline load.
upvoted 1 times
7 months, 1 week ago
B IS CORRECT
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: B
They are currently using On Demand instances, so option C is out.
A uses Spot instances which is not recommended for PROD and D uses Dedicated instances which are expensive.
So option B should be the one.
upvoted 4 times
7 months, 4 weeks ago
If we select B, Spot Instances are not reliable even though they save cost.
In D, the baseline and additional capacity are also On-Demand. More expensive than Reserved Instances, but it will not bring down production.
upvoted 3 times
7 months, 4 weeks ago
Selected Answer: C
I think C is the correct answer.
upvoted 4 times
6 months ago
C costs more
upvoted 1 times
Topic 1
Question #147
A company needs to retain application log files for a critical application for 10 years. The application team regularly accesses logs from the past
month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month.
Which storage option meets these requirements MOST cost-effectively?
A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep
Archive.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
Selected Answer: B
Why not AWS Backup? S3 Glacier Deep Archive is not supported by AWS Backup:
https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html
AWS Backup allows you to backup your S3 data stored in the following S3 Storage Classes:
• S3 Standard
• S3 Standard - Infrequently Access (IA)
• S3 One Zone-IA
• S3 Glacier Instant Retrieval
• S3 Intelligent-Tiering (S3 INT)
upvoted 6 times
7 months ago
AWS Backup costs something; setting up an S3 Lifecycle policy doesn't.
upvoted 3 times
Most Recent
4 days, 14 hours ago
Selected Answer: B
B is the most cost-effective solution. Storing the logs in S3 and using S3 Lifecycle policies to transition logs older than 1 month to S3 Glacier Deep
Archive allows for cost optimization based on data access patterns. Since logs older than 1 month are rarely accessed, moving them to S3 Glacier
Deep Archive helps minimize storage costs while still retaining the logs for the required 10-year period.
A is incorrect because using AWS Backup to move logs to S3 Glacier Deep Archive can incur additional costs and complexity compared to using S3
Lifecycle policies directly.
C adds unnecessary complexity and costs by involving CloudWatch Logs and AWS Backup when direct management through S3 is sufficient.
D is incorrect because using S3 Lifecycle policies to move logs from CloudWatch Logs to S3 Glacier Deep Archive is not a valid option. CloudWatch
Logs and S3 have separate storage mechanisms, and S3 Lifecycle policies cannot be applied directly to CloudWatch Logs.
upvoted 1 times
5 months, 2 weeks ago
B is correct.
upvoted 1 times
6 months ago
Selected Answer: B
Option B (Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1-month-old to S3 Glacier Deep Archive) would meet
these requirements in the most cost-effective manner.
This solution would allow the application team to quickly access the logs from the past month for troubleshooting, while also providing a cost-
effective storage solution for the logs that are rarely accessed and need to be retained for 10 years.
upvoted 1 times
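The rule described in option B can be sketched as a single S3 Lifecycle configuration. The bucket prefix and rule ID are hypothetical; in practice a dict of this shape is passed to boto3's `s3` client `put_bucket_lifecycle_configuration()`.

```python
# Sketch of the S3 Lifecycle rule from option B.  Prefix and rule ID are
# placeholder assumptions, not values from the question.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "app-logs/"},
            "Transitions": [
                {
                    # Logs older than a month are rarely accessed
                    "Days": 30,
                    "StorageClass": "DEEP_ARCHIVE",
                }
            ],
            # Retain for roughly 10 years, then expire
            "Expiration": {"Days": 3650},
        }
    ]
}

def rule_meets_requirements(cfg: dict) -> bool:
    """Transition at 30 days to Deep Archive, retain for at least 10 years."""
    rule = cfg["Rules"][0]
    t = rule["Transitions"][0]
    return (t["StorageClass"] == "DEEP_ARCHIVE"
            and t["Days"] == 30
            and rule["Expiration"]["Days"] >= 10 * 365)
```

Once the rule is attached to the bucket, the transitions happen automatically with no extra service to pay for, which is the cost argument against A, C, and D.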
6 months, 1 week ago
Selected Answer: B
Option B is most cost-effective. Moving logs to CloudWatch Logs may incur additional costs.
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: B
S3 + Glacier is the most cost effective.
upvoted 3 times
7 months, 3 weeks ago
Selected Answer: B
D works (archive CloudWatch Logs to S3), but it is an additional service to pay for over B.
upvoted 1 times
6 months, 4 weeks ago
CloudWatch logs can't store around 10 TB of data per month I believe so both C and D options are ruled out already.
upvoted 1 times
7 months, 4 weeks ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/80772-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Topic 1
Question #148
A company has a data ingestion workflow that includes the following components:
An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data deliveries
An AWS Lambda function that processes and stores the data
The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs, the corresponding data is not ingested
unless the company manually reruns the job.
What should a solutions architect do to ensure that all notifications are eventually processed?
A. Configure the Lambda function for deployment across multiple Availability Zones.
B. Modify the Lambda function's configuration to increase the CPU and memory allocations for the function.
C. Configure the SNS topic's retry strategy to increase both the number of retries and the wait time between retries.
D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process
messages in the queue.
Correct Answer:
D
Highly Voted
8 months ago
Selected Answer: D
*ensure that all notifications are eventually processed*
upvoted 9 times
Most Recent
4 months, 1 week ago
Selected Answer: D
This is why https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html
upvoted 3 times
4 months, 3 weeks ago
C is not the right answer: after several retries, SNS discards the message, which doesn't align with the requirement. D is the right answer.
upvoted 2 times
4 months, 3 weeks ago
Best solution to process failed SNS notifications is using sns-dead-letter-queues (SQS Queue for reprocessing)
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 2 times
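As the comment above notes, the "on-failure destination" in option D maps to an SNS dead-letter queue: an SQS queue attached to the subscription through its `RedrivePolicy` attribute. A minimal sketch of that attribute value, assuming a placeholder queue ARN:

```python
import json

# Redrive policy attached to the SNS *subscription* (not the topic): messages
# that SNS cannot deliver to the Lambda subscriber are moved to this SQS queue
# so they can be reprocessed later. The queue ARN is a placeholder.
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:sns-dlq"
redrive_policy = json.dumps({"deadLetterTargetArn": dlq_arn})

# With boto3 this value would be applied as:
# boto3.client("sns").set_subscription_attributes(
#     SubscriptionArn=subscription_arn,
#     AttributeName="RedrivePolicy",
#     AttributeValue=redrive_policy)
print(redrive_policy)
```

The Lambda function (or a separate one) then polls this queue to drain the failed notifications, which is what "eventually processed" requires.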
5 months, 2 weeks ago
Selected Answer: D
To ensure that all notifications are eventually processed, the best solution would be to configure an Amazon Simple Queue Service (SQS) queue as
the on-failure destination for the SNS topic. This will allow the notifications to be retried until they are successfully processed. The Lambda function
can then be modified to process messages in the queue, ensuring that all notifications are eventually processed. Option D, "Configure an Amazon
Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process messages in the queue," is the
correct answer.
upvoted 1 times
6 months ago
Selected Answer: D
I choose Option D as the correct answer.
To ensure that all notifications are eventually processed, the solutions architect can set up an Amazon SQS queue as the on-failure destination for
the Amazon SNS topic. This way, when the Lambda function fails due to network connectivity issues, the notification will be sent to the queue
instead of being lost. The Lambda function can then be modified to process messages in the queue, ensuring that all notifications are eventually
processed.
upvoted 2 times
6 months ago
Selected Answer: D
Option D: to ensure that all notifications are eventually processed, you need to use SQS.
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
Community vote distribution: D (83%), C (17%)
Option C is the right option.
SNS does not have any "on failure" delivery destination. One needs to configure a dead-letter queue and configure SQS to read from there. So given
this, option D is incorrect.
upvoted 2 times
6 months, 1 week ago
I don't think that's right
"A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to subscribers
successfully. Messages that can't be delivered due to client errors or server errors are held in the dead-letter queue for further analysis or
reprocessing" from https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html.
This is pretty much what is being described in D.
Plus C will only retry message processing, and network problems could still prevent the message from being processed, but the question states
"ensure that all notifications are eventually processed". So C does not meet the requirements but D does look to do this.
upvoted 4 times
6 months, 1 week ago
Selected Answer: D
Is correct.
upvoted 1 times
6 months, 1 week ago
If you want to ensure that all notifications are eventually processed you need to use SQS.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: D
C isn't specific enough. Hence D.
upvoted 1 times
7 months, 1 week ago
Selected Answer: C
"on-failure destination" doesn't exist, only dead letter queue exist.
that's why I am leaning for C
upvoted 1 times
6 months, 3 weeks ago
A dead-letter queue doesn't exist in SNS itself. They are specifically saying a new queue will be configured for failures from SNS. Hence D.
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: D
D is the answer
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: D
Option C could work, but the maximum retry window is 23 days. After that, messages are deleted, and you do not want that to happen! So, option D.
upvoted 4 times
8 months ago
Selected Answer: D
imho, D is the answer
upvoted 1 times
8 months, 2 weeks ago
Selected Answer: C
should be C:
https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html
upvoted 2 times
6 months, 3 weeks ago
It should be D in this case. In the URL you referred to, there is the following statement: "With the exception of HTTP/S, you can't change Amazon
SNS-defined delivery policies. Only HTTP/S supports custom policies. See Creating an HTTP/S delivery policy." Hence you can't customize the
retry behavior for Lambda, and option D is more relevant.
upvoted 1 times
Topic 1
Question #149
A company has a service that produces event data. The company wants to use AWS to process the event data as it is received. The data is written
in a specific order that must be maintained throughout processing. The company wants to implement a solution that minimizes operational
overhead.
How should a solutions architect accomplish this?
A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process
messages from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an
AWS Lambda function as a subscriber.
C. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process
messages from the queue independently.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an
Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.
Correct Answer:
A
4 days, 14 hours ago
A is the correct solution. By creating an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages and setting up an AWS
Lambda function to process messages from the queue, the company can ensure that the order of the event data is maintained throughout
processing. SQS FIFO queues guarantee the order of messages and are suitable for scenarios where strict message ordering is required.
B is incorrect because Amazon Simple Notification Service (Amazon SNS) topics are not designed to preserve message order. SNS is a publish-
subscribe messaging service and does not guarantee the order of message delivery.
C is incorrect because using an SQS standard queue does not guarantee the order of message processing. SQS standard queues provide high
throughput and scale, but they do not guarantee strict message ordering.
D is incorrect because configuring an SQS queue as a subscriber to an SNS topic does not ensure message ordering. SNS topics distribute
messages to subscribers independently, and the order of message processing is not guaranteed.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: A
A is correct. Use FIFO to process in the specific order required
upvoted 2 times
3 months, 4 weeks ago
Selected Answer: A
Option A is correct...data is processed in the correct order
upvoted 1 times
6 months ago
Selected Answer: A
The correct solution is Option A. Creating an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages and setting up an AWS
Lambda function to process messages from the queue will ensure that the event data is processed in the correct order and minimize operational
overhead.
Option B is incorrect because using Amazon Simple Notification Service (Amazon SNS) does not guarantee the order in which messages are
delivered.
Option C is incorrect because using an Amazon SQS standard queue does not guarantee the order in which messages are processed.
Option D is incorrect because using an Amazon SQS queue as a subscriber to an Amazon SNS topic does not guarantee the order in which
messages are processed.
upvoted 3 times
6 months ago
Only A is right option here.
upvoted 1 times
6 months, 1 week ago
Community vote distribution: A (100%)
Selected Answer: A
Option A is the best option.
upvoted 2 times
6 months, 1 week ago
Selected Answer: A
"The data is written in a specific order that must be maintained throughout processing" --> FIFO
upvoted 4 times
6 months, 1 week ago
Selected Answer: A
specific order = FIFO
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: A
Definitely A
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 3 weeks ago
Selected Answer: A
FIFO means order, so Option A.
upvoted 4 times
7 months, 4 weeks ago
Order means FIFO, so option A.
upvoted 3 times
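Since every answer above reduces to "specific order = FIFO", here is a minimal sketch of the SQS FIFO setup option A describes; the queue name and message group ID are placeholder assumptions. The `.fifo` suffix is mandatory, and messages sharing a `MessageGroupId` are delivered in the exact order they were sent.

```python
# FIFO queue attributes for option A. ContentBasedDeduplication lets SQS
# deduplicate on a SHA-256 of the message body instead of requiring an
# explicit MessageDeduplicationId on every send.
queue_name = "event-data.fifo"  # placeholder name; ".fifo" suffix is required
queue_attributes = {
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true",
}

def send_kwargs(queue_url: str, body: str, group_id: str) -> dict:
    """Build the parameters for sqs.send_message on a FIFO queue."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,  # ordering is guaranteed per group
    }

# With boto3 this would be wired up as:
# sqs = boto3.client("sqs")
# url = sqs.create_queue(QueueName=queue_name, Attributes=queue_attributes)["QueueUrl"]
# sqs.send_message(**send_kwargs(url, '{"event": 1}', "producer-1"))
```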
Topic 1
Question #150
A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a
solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more
than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time,
the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?
A. Create Amazon CloudWatch composite alarms where possible.
B. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.
C. Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.
D. Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.
Correct Answer:
A
Highly Voted
8 months, 1 week ago
Selected Answer: A
Composite alarms determine their states by monitoring the states of other alarms. You can **use composite alarms to reduce alarm noise**. For
example, you can create a composite alarm where the underlying metric alarms go into ALARM when they meet specific conditions. You then can
set up your composite alarm to go into ALARM and send you notifications when the underlying metric alarms go into ALARM by configuring the
underlying metric alarms never to take actions.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
upvoted 21 times
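The behavior quoted above can be sketched as the `AlarmRule` expression that CloudWatch's `PutCompositeAlarm` API takes; the two child metric alarm names below are assumptions. The composite alarm enters ALARM only when both children are in ALARM at the same time, which is exactly the "CPU high AND read IOPS high" requirement and why false alarms drop.

```python
# Child metric alarms (created separately with put_metric_alarm) are combined
# with a boolean rule. Names are placeholders for illustration.
cpu_alarm = "cpu-utilization-gt-50"
iops_alarm = "read-iops-high"
alarm_rule = f"ALARM({cpu_alarm}) AND ALARM({iops_alarm})"

# With boto3 the composite alarm would be created as:
# boto3.client("cloudwatch").put_composite_alarm(
#     AlarmName="cpu-and-read-iops",
#     AlarmRule=alarm_rule,
#     AlarmActions=[sns_topic_arn])  # notify only when BOTH are firing
print(alarm_rule)
```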
Most Recent
4 days, 14 hours ago
Selected Answer: A
By creating composite alarms in CloudWatch, the solutions architect can combine multiple metrics, such as CPU utilization and read IOPS, into a
single alarm. This allows the company to take action only when both conditions are met, reducing false alarms and focusing on meaningful alerts.
B can help in monitoring the overall health and performance of the application. However, it does not directly address the specific requirement of
triggering an action when CPU utilization and read IOPS exceed certain thresholds simultaneously.
C. Creating CloudWatch Synthetics canaries is useful for actively monitoring the application's behavior and availability. However, it does not directly
address the specific requirement of monitoring CPU utilization and read IOPS to trigger an action.
D. Creating single CloudWatch metric alarms with multiple metric thresholds where possible can be an option, but it does not address the
requirement of triggering an action only when both CPU utilization and read IOPS exceed their respective thresholds simultaneously.
upvoted 1 times
1 month ago
The composite alarm goes into ALARM state only if all conditions of the rule are met.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A, creating Amazon CloudWatch composite alarms, is correct because it allows the solutions architect to create an alarm that is triggered
only when both CPU utilization is above 50% and read IOPS on the disk are high at the same time. This meets the requirement to act as soon as
possible if both conditions are met, while also reducing the number of false alarms by ensuring that the alarm is triggered only when both
conditions are met.
upvoted 2 times
6 months, 1 week ago
The incorrect solutions are:
In contrast, Option B, creating Amazon CloudWatch dashboards, would not directly address the requirement to trigger an alarm when both CPU
utilization is high and read IOPS on the disk are high at the same time. Dashboards can be useful for visualizing metric data and identifying
trends, but they do not have the capability to trigger alarms based on multiple metric thresholds.
Option C, using Amazon CloudWatch Synthetics canaries, may not be the best choice for this scenario, as canaries are used for synthetic testing
rather than for monitoring live traffic. Canaries can be useful for monitoring the availability and performance of an application, but they may not
be the most effective way to monitor the specific metric thresholds and conditions described in this scenario.
upvoted 2 times
6 months, 1 week ago
Community vote distribution: A (100%)
Option D, creating single Amazon CloudWatch metric alarms with multiple metric thresholds, would not allow the solutions architect to
create an alarm that is triggered only when both CPU utilization and read IOPS on the disk are high at the same time. Instead, the alarm
would be triggered whenever any of the specified metric thresholds are exceeded, which may result in a higher number of false alarms.
upvoted 2 times
6 months, 1 week ago
A is the correct answer.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 2 weeks ago
The AWS::CloudWatch::CompositeAlarm type creates or updates a composite alarm. When you create a composite alarm, you specify a rule
expression for the alarm that takes into account the alarm states of other alarms that you have created. The composite alarm goes into ALARM
state only if all conditions of the rule are met.
The alarms specified in a composite alarm's rule expression can include metric alarms and other composite alarms.Using composite alarms can
reduce alarm noise.
upvoted 3 times
7 months, 1 week ago
A is correct
upvoted 1 times
Topic 1
Question #151
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use
only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet.
Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-
northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all
AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the
use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed
outside of ap-northeast-3.
Correct Answer:
AC
Highly Voted
8 months ago
Selected Answer: AC
agree with A and C
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_vpc.html#example_vpc_2
upvoted 14 times
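The SCP approach linked above can be sketched as a policy document that combines a region-deny statement with a deny on the EC2 calls that give a VPC internet access. This is an illustrative draft, not a complete production policy: global services such as IAM normally need a `NotAction` exemption from the region deny, which is omitted here.

```python
import json

# Draft SCP for option C: deny everything outside ap-northeast-3, and deny the
# API calls that would attach internet access to a VPC.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOtherRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}
            },
        },
        {
            "Sid": "DenyVpcInternetAccess",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:CreateEgressOnlyInternetGateway",
            ],
            "Resource": "*",
        },
    ],
}

# With boto3 this would be created and attached through AWS Organizations, e.g.
# boto3.client("organizations").create_policy(
#     Content=json.dumps(scp), Name="region-and-internet-deny",
#     Description="Compliance guardrail", Type="SERVICE_CONTROL_POLICY")
print(json.dumps(scp, indent=2))
```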
Highly Voted
7 months, 2 weeks ago
https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-requirements/
*Disallow internet access for an Amazon VPC instance managed by a customer
upvoted 9 times
7 months, 2 weeks ago
Option A and C
upvoted 2 times
7 months, 2 weeks ago
*You can use data-residency guardrails to control resources in any AWS Region.
upvoted 1 times
Most Recent
4 days, 14 hours ago
Selected Answer: AC
A. By using Control Tower, the company can enforce data residency guardrails that restrict internet access for VPCs and deny access to all
Regions except the required ap-northeast-3 Region.
C. With Organizations, the company can configure SCPs to prevent VPCs from gaining internet access. By denying access to all Regions except
ap-northeast-3, the company ensures that VPCs can only be deployed in the specified Region.
Option B is incorrect because using rules in AWS WAF alone does not address the requirement of denying access to all AWS Regions except ap-
northeast-3.
Option D is incorrect because configuring outbound rules in network ACLs and IAM policies for users can help restrict traffic and access, but it does
not enforce the company's requirement of denying access to all Regions except ap-northeast-3.
Option E is incorrect because using AWS Config and managed rules can help detect and alert for specific resources and configurations, but it does
not directly enforce the restriction of internet access or deny access to specific Regions.
upvoted 1 times
2 weeks, 4 days ago
Didn't know that SCPs (service control policies) could be used to deny users internet access. Good to know. I always thought it was about
controlling who can and can't access AWS services.
upvoted 1 times
2 months ago
Agree with A and C.
https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-requirements/
Community vote distribution: AC (57%), CE (20%), 13%, 10%
upvoted 1 times
2 months, 2 weeks ago
I choose C and D.
For Control Tower, it can't be A because ap-northeast-3 doesn't support it!
Also, in the case of E, it only detects and alerts, so it can't prevent an internet connection (although the wording is a little ambiguous).
upvoted 1 times
1 month, 3 weeks ago
I just checked; now it's supported!
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: AC
A and C
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: CD
C/D
A - CANNOT BE!!! AWS Control Tower is not available in ap-northeast-3! Check your console.
B - for sure no
C - SCPs (service control policies) - for sure
D - deny outbound rule to be placed in prod, and also an IAM policy to deny users creating services in ap-northeast-3
E - it creates an alert, which means the event happens and an alert is triggered, so I think it's not good either.
upvoted 2 times
2 months, 1 week ago
False, Control Tower is in Osaka NorthEast 3
https://docs.aws.amazon.com/controltower/latest/userguide/region-how.html
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: CD
Control tower isn't available in AP-northeast-3 (only available in ap-northeast1 and 2 : https://www.aws-services.info/controltower.html)
For answer E, it creates an alert, which means the event happens and then an alert is triggered, so I think it's not good either.
That's why I would go for C and D.
upvoted 2 times
1 month ago
It's available now, on the same link you pasted earlier: ap-northeast-3 Asia Pacific (Osaka) 2023-04-20.
upvoted 1 times
2 months, 1 week ago
same page you posted:
ap-northeast-3 Asia Pacific (Osaka) 2023-04-20 https://aws.amazon.com/controltower
upvoted 1 times
2 months, 1 week ago
False, Control Tower is in Osaka NorthEast 3
https://docs.aws.amazon.com/controltower/latest/userguide/region-how.html
upvoted 1 times
3 months, 1 week ago
Selected Answer: CE
AWS Control tower is not available in ap-northeast-3!
https://www.aws-services.info/controltower.html
upvoted 1 times
3 months, 1 week ago
What's wrong with B?
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: CE
A - CANNOT BE!!! AWS Control Tower is not available in ap-northeast-3! Check your console.
upvoted 4 times
4 months ago
From ChatGPT :)
Control Tower: Can
Yes, AWS Control Tower can implement data residency guardrails to deny internet access and restrict access to AWS Regions except for one.
To restrict access to AWS regions, you can create a guardrail using AWS Organizations to deny access to all AWS regions except for the one that
you want to allow. This can be done by creating an organizational policy that restricts access to specific AWS services and resources based on
region.
Config: Can(not).
Yes, AWS Config can help you enforce restrictions on internet access and control access to specific AWS Regions using AWS Config Rules.
It's worth noting that AWS Config is a monitoring service that provides continuous assessment of your AWS resources against desired
configurations. While AWS Config can alert you when a configuration change occurs, it cannot directly restrict access to resources or enforce
specific policies. For that, you may need to use other AWS services such as AWS Identity and Access Management (IAM), AWS Firewall Manager, or
AWS Organizations.
upvoted 3 times
4 months, 2 weeks ago
Option A uses AWS Control Tower to implement data residency guardrails, but it does not prevent internet access by itself. It only denies access to
all AWS Regions except ap-northeast-3. The requirement states that administrators are not permitted to connect VPCs to the internet, so Option A
does not meet this requirement.
upvoted 2 times
5 months, 1 week ago
Selected Answer: CE
Option A is not a valid solution because AWS Control Tower is a service that helps customers set up and govern a new, secure, multi-account AWS
environment based on best practices. It does not provide specific guardrails that would prevent internet access or restrict access to a specific
region.
Option C is a valid solution because AWS Organizations can be used to configure service control policies (SCPs) that can prevent VPCs from gaining
internet access, and this can be done by denying access to all AWS Regions except ap-northeast-3.
Option E is also a valid solution because AWS Config can be used to activate managed rules to detect and alert for internet gateways and to detect
and alert for new resources deployed outside of ap-northeast-3. This can help to ensure compliance with the company's requirements to prevent
internet access and to limit access to a specific region.
upvoted 1 times
4 months, 3 weeks ago
The most interesting guardrail is probably the one denying access to AWS based on the requested AWS Region. I chose it from the list and found
that it is different from the other guardrails because it affects all organizational units (OUs) and cannot be activated here but must be activated
in the landing zone settings.
https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-
requirements/#:~:text=AWS%20Control%20Tower%20also%20offers,the%20creation%20of%20internet%20gateway
upvoted 1 times
5 months, 3 weeks ago
C and E
To meet the requirements of not allowing VPCs to connect to the internet and limiting the AWS Region to ap-northeast-3, you can use the
following solutions:
C: Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all AWS
Regions except ap-northeast-3. This will ensure that VPCs cannot access the internet and can only be created in the ap-northeast-3 Region.
E: Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside
of ap-northeast-3. This will allow you to monitor for any attempts to connect VPCs to the internet or to deploy resources outside of the ap-
northeast-3 Region, and alert you if any such attempts are detected.
upvoted 1 times
5 months, 1 week ago
Not E. "Company administrators are not permitted...", an alert detect a connection an send an alert, not prevent the connection
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: AD
You can now use AWS Control Tower guardrails to deny services and operations for AWS Region(s) of your choice in your AWS Control Tower
environments. The Region deny capabilities complement existing AWS Control Tower Region selection and Region deselection features, providing
you with the capabilities to address compliance and regulatory requirements while improving cost efficiency of expanding into additional Regions.
Along with the Region Deny feature, a set of data residency guardrails are released to help customers with data residency requirements. You can
use these guardrails to choose the AWS Region that is in your desired location and have complete control and ownership over the region in which
your data is physically located, making it easy to meet regional compliance and data residency requirements. https://controltower.aws-
management.tools/security/restrict_regions/
upvoted 3 times
5 months, 3 weeks ago
I meant A and C, not D. Please allow editing posts after they are submitted.
upvoted 1 times
Topic 1
Question #152
A company uses a three-tier web application to provide training to new employees. The application is accessed for only 12 hours every day. The
company is using an Amazon RDS for MySQL DB instance to store information and wants to minimize costs.
What should a solutions architect do to meet these requirements?
A. Configure an IAM policy for AWS Systems Manager Session Manager. Create an IAM role for the policy. Update the trust relationship of the
role. Set up automatic start and stop for the DB instance.
B. Create an Amazon ElastiCache for Redis cache cluster that gives users the ability to access the data from the cache when the DB instance
is stopped. Invalidate the cache after the DB instance is started.
C. Launch an Amazon EC2 instance. Create an IAM role that grants access to Amazon RDS. Attach the role to the EC2 instance. Configure a
cron job to start and stop the EC2 instance on the desired schedule.
D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge (Amazon CloudWatch Events) scheduled rules
to invoke the Lambda functions. Configure the Lambda functions as event targets for the rules.
Correct Answer:
D
Highly Voted
8 months ago
https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/
It is option D. Option A could have been applicable had it been AWS Systems Manager State Manager & not AWS Systems Manager Session
Manager
upvoted 25 times
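The blog linked above pairs EventBridge scheduled rules with Lambda functions that call the RDS stop/start APIs. Below is a minimal sketch of that logic; the DB identifier and cron times are assumptions, and the RDS client is passed in as a parameter so the function can be exercised without AWS credentials (a real Lambda handler takes `(event, context)` and would build the client with `boto3.client("rds")`).

```python
# Start or stop an RDS instance depending on the event payload. EventBridge
# scheduled rules would invoke this with {"action": "start"} in the morning
# and {"action": "stop"} in the evening, matching the 12-hour usage window.
def handler(event, rds_client, db_id="training-app-db"):  # db_id is a placeholder
    action = event.get("action")
    if action == "start":
        rds_client.start_db_instance(DBInstanceIdentifier=db_id)
    elif action == "stop":
        rds_client.stop_db_instance(DBInstanceIdentifier=db_id)
    else:
        raise ValueError(f"unknown action: {action!r}")
    return {"action": action, "db": db_id}

# Assumed EventBridge schedule expressions (UTC): the rules would be created
# with events.put_rule(Name=..., ScheduleExpression=...) and target the Lambda.
start_schedule = "cron(0 8 * * ? *)"   # start at 08:00
stop_schedule = "cron(0 20 * * ? *)"   # stop at 20:00
```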
Highly Voted
8 months, 1 week ago
Selected Answer: A
A is true for sure. "Schedule Amazon RDS stop and start using AWS Systems Manager" Steps in the documentation:
1. Configure an AWS Identity and Access Management (IAM) policy for State Manager.
2. Create an IAM role for the new policy.
3. Update the trust relationship of the role so Systems Manager can use it.
4. Set up the automatic stop with State Manager.
5. Set up the automatic start with State Manager.
https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-systems-manager/
upvoted 8 times
7 months, 3 weeks ago
Option A refers to Session Manager, not State Manager as you pointed, so it is wrong. Option D is valid.
upvoted 6 times
7 months, 3 weeks ago
Agree with A: State Manager is free to use within limits, and you don't need to code or manage Lambda.
upvoted 1 times
8 months ago
It looks like State Manager and Session Manager are used for different purposes, even though both appear in the same dashboard console.
upvoted 1 times
8 months ago
And of course, D works, so if A were also right, the question would be wrong.
upvoted 3 times
Most Recent
4 days, 13 hours ago
Selected Answer: D
By using AWS Lambda functions triggered by Amazon EventBridge scheduled rules, the company can automate the start and stop actions for the
Amazon RDS for MySQL DB instance based on the 12-hour access period. This allows them to minimize costs by only running the DB instance
when it is needed.
Option A is not the most suitable solution because it refers to IAM policies for AWS Systems Manager Session Manager, which is primarily used for
interactive shell access to EC2 instances and does not directly address the requirement of starting and stopping the DB instance.
Option B is not the most suitable solution because it suggests using Amazon ElastiCache for Redis as a cache cluster, which may not provide the
desired cost optimization for the DB instance.
Community vote distribution: D (76%), A (24%)
Option C is not the most suitable solution because launching an EC2 instance and configuring cron jobs to start and stop it does not directly
address the requirement of minimizing costs for the Amazon RDS DB instance.
upvoted 1 times
1 month ago
Selected Answer: D
I got this question in real exam!
upvoted 2 times
5 days, 22 hours ago
Why do we need more than one Lambda function to start and stop the DB instance? BTW, how many questions came from this site?
upvoted 1 times
1 month, 2 weeks ago
State Manager, a capability of AWS Systems Manager
upvoted 1 times
2 months ago
Selected Answer: D
Option D is correct
upvoted 2 times
2 months, 1 week ago
Selected Answer: D
In a typical development environment, dev and test databases are mostly utilized for 8 hours a day and sit idle when not in use. However, the
databases are billed for the compute and storage costs during this idle time. To reduce the overall cost, Amazon RDS allows instances to be
stopped temporarily. While the instance is stopped, you’re charged for storage and backups, but not for the DB instance hours. Please note that a
stopped instance will automatically be started after 7 days.
This post presents a solution using AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and start the idle
databases with specific tags to save on compute costs. The second post presents a solution that accomplishes stop and start of the idle Amazon
RDS databases using AWS Systems Manager.
upvoted 2 times
3 months, 1 week ago
Selected Answer: D
https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-ref-rds.html
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and start the idle databases with specific tags to
save on compute costs. https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/
upvoted 2 times
5 months, 4 weeks ago
Selected Answer: D
The correct answer is D. Creating AWS Lambda functions to start and stop the DB instance and using Amazon EventBridge (Amazon CloudWatch
Events) scheduled rules to invoke the Lambda functions is the most cost-effective way to meet the requirements. The Lambda functions can be
configured as event targets for the scheduled rules, which will allow the DB instance to be started and stopped on the desired schedule.
upvoted 4 times
6 months, 1 week ago
Selected Answer: D
It's D, confirmed via other exam test pages.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
Option D is the best option. Session Manager cannot be used to start and stop DB instances; it is used for browser-based SSH access to
instances.
upvoted 2 times
7 months, 3 weeks ago
Selected Answer: D
Option D is the one. Option A could be as well if it referred to State Manager instead of Session Manager.
upvoted 5 times
7 months, 4 weeks ago
Selected Answer: D
I think A or D, but D is cheaper (minimize costs) because you pay for Lambda only when you use it.
upvoted 1 times
8 months ago
Selected Answer: D
voted d
upvoted 2 times
8 months ago
Selected Answer: D
Vote D
upvoted 3 times
Topic 1
Question #153
A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at
least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to
save money on storage while keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 Inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90
days.
Correct Answer:
D
Highly Voted
7 months, 2 weeks ago
Selected Answer: D
Answer D
Why option D?
The question says downloads are infrequent for ringtones older than 90 days, which means files less than 90 days old are accessed frequently. S3
Standard-Infrequent Access (S3 Standard-IA) has a 30-day minimum storage duration and charges for retrieval, so it costs more when objects are accessed often.
So while the files are accessed frequently you need S3 Standard. After 90 days you can move them to S3 Standard-IA, as they are
going to be accessed less frequently.
upvoted 28 times
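Option D maps to a single S3 Lifecycle rule. A minimal sketch of that configuration, assuming a placeholder rule ID; in practice the dict is passed to boto3's `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...)`:

```python
# Lifecycle rule for option D: transition every object from S3 Standard to
# S3 Standard-IA 90 days after its creation date. The rule ID is a placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "ringtones-to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to the whole bucket
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},
            ],
        }
    ]
}
```

Once applied, S3 handles the transitions itself, so there is no recurring operational work.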
Highly Voted
7 months, 1 week ago
Selected Answer: B
B and D both seem possible, but I'll go with B.
In the pricing table below, S3 Intelligent-Tiering is no more expensive than S3 Standard.
https://aws.amazon.com/s3/pricing/?nc1=h_ls
And the "128 KB" size in the question is a reference to S3 Intelligent-Tiering's auto-tiering threshold.
upvoted 11 times
1 month, 1 week ago
Have you tried to implement B? How do you configure Intelligent-Tiering to move objects to a less expensive storage tier after 90 days, and
which storage tier is this "less expensive" one? The answer is clearly wrong... the correct answer is D.
upvoted 1 times
6 months, 3 weeks ago
S3 Intelligent-Tiering is used when the access frequency is not known. I think 128 KB is a distractor.
upvoted 5 times
6 months ago
Also, there are probably several ringtones that aren't popular/used. Why keep them in S3 Standard? The company would save money if S3
Intelligent-Tiering moved the unpopular ringtones to a more cost-effective tier than S3 Standard.
upvoted 1 times
7 months ago
This link also has me going with “B.” Specifying 128 KB in size is not a coincidence. https://aws.amazon.com/s3/storage-classes/intelligent-
tiering/
upvoted 3 times
6 months, 3 weeks ago
Because of that link it is D.
There are no retrieval charges in S3 Intelligent-Tiering. S3 Intelligent-Tiering has no minimum eligible object size, but objects smaller than
128 KB are not eligible for auto tiering. These smaller objects may be stored, but they’ll always be charged at the Frequent Access tier
upvoted 1 times
6 months, 3 weeks ago
Oh sorry, it states objects are at least 128 KB. B is correct.
upvoted 1 times
Most Recent
Community vote distribution: D (61%), B (39%)
4 days, 13 hours ago
Selected Answer: B
By using S3 IT, the company can take advantage of automatic cost optimization. IT moves objects between two access tiers: frequent access and
infrequent access. In this case, since downloads for ringtones older than 90 days are infrequent, IT will automatically move those objects to the less
expensive infrequent access tier, reducing storage costs while keeping the most accessed files readily available.
A is not the most cost-effective solution because it doesn't consider the requirement of keeping the most accessed files readily available. S3
Standard-IA is designed for data that is accessed less frequently, but it still incurs higher costs compared to IT.
C is not the most suitable solution for reducing storage costs. S3 inventory provides a list of objects and their metadata, but it does not offer direct
cost optimization features.
D is not the most cost-effective solution because it only moves objects from S3 Standard to S3 Standard-IA after 90 days. It doesn't take advantage
of the benefits of IT, which automatically optimizes costs based on access patterns.
upvoted 1 times
3 weeks, 2 days ago
Selected Answer: D
128 KB is just a trap.
It cannot be B because:
1. Intelligent-Tiering requires no configuration for class transitions - your only option is whether to opt into the Archive/Deep Archive Access tiers, which
does not make sense for the requirement. Those two tiers are the cheapest for storage but are expensive to retrieve from.
2. Nowhere is it mentioned that the access pattern is unpredictable. If we really have to assume, I would rather assume that new songs have
higher access frequency. In that case, you don't really benefit from the auto-transition feature that Intelligent-Tiering provides. You will be paying the same
rate as the S3 Standard class plus an additional monitoring fee for using Intelligent-Tiering. Since the requirement is the most cost-efficient solution, D is the answer.
upvoted 1 times
3 weeks, 2 days ago
To add to my point above, for Intelligent-Tiering to move a file from:
Frequent tier > Infrequent tier - requires the object to not be accessed for 30 consecutive days
Infrequent tier > Archive/Deep Archive - requires the object to not be accessed for 90 days or more.
Can one guarantee that a new song will not be downloaded for 30 consecutive days in order to take advantage of Intelligent-Tiering's automated storage
class transition? Even if that's the case, there is nothing the user needs to "configure". B would only be a valid solution if the configuration part
were taken out.
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
upvoted 1 times
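For context on point 1 above: the only transition you actually configure on Intelligent-Tiering is the optional archive opt-in; the Frequent/Infrequent movement is automatic and has no knobs. A sketch of that opt-in payload (the configuration ID is a placeholder), as passed to boto3's `s3.put_bucket_intelligent_tiering_configuration`:

```python
# Archive opt-in for S3 Intelligent-Tiering. Only the archive tiers are
# configurable; 90 days is the minimum for ARCHIVE_ACCESS.
intelligent_tiering_configuration = {
    "Id": "archive-after-90-days",  # placeholder configuration ID
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
    ],
}
```

Note this moves objects to an archive tier with restore latency, which is a different behavior from the Standard-to-Standard-IA transition the question's option B implies.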
1 month ago
Selected Answer: B
S3 Intelligent-Tiering is designed to optimize costs by automatically moving objects between two access tiers: frequent access and infrequent
access. By moving the files to S3 Intelligent-Tiering, the company can take advantage of the automatic tiering feature to save costs on storage.
Initially, the files will be stored in the frequent access tier for quick and easy access. However, since downloads for ringtones older than 90 days are
infrequent, after that period, the objects will automatically be moved to the infrequent access tier, which offers a lower storage cost compared to
the frequent access tier
upvoted 1 times
1 month ago
The question mentions that the files are stored in S3 Standard, so you need to transition them from S3 Standard using an S3 Lifecycle policy that
moves the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
"At least 128 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to
save money on storage while keeping the most accessed files readily available for its users." - this means some of the most accessed files can be more
than 90 days old, so we should go with Intelligent-Tiering, as the access patterns are unpredictable.
upvoted 1 times
1 month, 4 weeks ago
Selected Answer: B
Answer should be B.
S3 Standard and S3 Intelligent - Tiering are both $0.023 per GB per month.
However S3 Standard - Infrequent Access is $0.0125 per GB while S3 Intelligent - Tiering Archive Access Tier is $0.0036 per GB. S3 Intelligent -
Tiering Deep Archive Access Tier is even cheaper at $0.00099 per GB. Thus the answer is B.
upvoted 1 times
2 months ago
Selected Answer: D
I vote for option D
B says it will move objects to less expensive storage, which could also be Glacier, but that does not fulfill the requirements of the question.
upvoted 2 times
2 months ago
Selected Answer: D
D is more cost-effective, and the access pattern is known.
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
S3 Intelligent Tiering
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
For S3 Intelligent-Tiering, objects smaller than 128 KB are not eligible for auto tiering. These smaller objects may be stored, but they’ll always be
charged at the Frequent Access tier rates and don’t incur the monitoring and automation charge. The question says "The files containing the
ringtones are stored in Amazon S3 Standard and are at least 128 KB in size", so all files will get the benefit from S3 Intelligent-Tiering. The question
also says "while keeping the most accessed files readily available for its users.", consequently, B is the best choice
upvoted 2 times
2 months, 2 weeks ago
I stand corrected, answer B: Based on the given information, the company can use Amazon S3 Intelligent-Tiering for storing its files
containing ringtones. Since the files are at least 128 KB in size, they will not incur any minimum object storage charges. Additionally, the company
can take advantage of the auto-tiering feature of S3 Intelligent-Tiering, which automatically moves objects between different storage tiers based
on their access patterns. This can help reduce storage costs by moving infrequently accessed files to the lower-cost tiers.
However, it is important to note that objects smaller than 128KB are not eligible for auto-tiering in S3 Intelligent-Tiering. Therefore, if the company
has any files smaller than 128KB, they should continue to store them in Amazon S3 Standard.
upvoted 1 times
2 months, 2 weeks ago
Answer D: S3 Standard-IA is designed for larger objects and has a minimum object storage charge of 128 KB. Objects smaller than 128 KB in size
will incur storage charges as if the object were 128KB. For example, a 6KB object in S3 Standard-IA will incur S3 Standard-IA storage charges for
6KB and an additional minimum object size charge equivalent to 122KB at the S3 Standard-IA storage price. See the Amazon S3 pricing page for
information about S3 Standard-IA pricing.
There is no minimum object size for S3 Intelligent-Tiering, but objects smaller than 128KB are not eligible for auto-tiering. These smaller objects
may be stored in S3 Intelligent-Tiering, but will always be charged at the Frequent Access tier rates, and are not charged the monitoring and
automation charge.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
*B* makes more sense because you will not have to wait for 90 days to save costs for the ringtones that do not perform
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: D
Why not B? There are no retrieval charges in S3 Intelligent-Tiering. S3 Intelligent-Tiering has no minimum eligible object size, but objects smaller
than **128 KB** are not eligible for auto tiering. *** These smaller objects may be stored, but they’ll always be charged at the Frequent Access tier
**** rates and don’t incur the monitoring and automation charge.
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
upvoted 2 times
3 months ago
Selected Answer: B
"objects smaller than 128KB are not eligible for auto-tiering": So B makes more sense. Since Intelligent tiering applies for 128KB+ files(atleast).
upvoted 1 times
Topic 1
Question #154
A company needs to save the results from a medical trial to an Amazon S3 repository. The repository must allow a few scientists to add new files
and must restrict all other users to read-only access. No users can have the ability to modify or delete any files in the repository. The company
must keep every file in the repository for a minimum of 1 year after its creation date.
Which solution will meet these requirements?
A. Use S3 Object Lock in governance mode with a legal hold of 1 year.
B. Use S3 Object Lock in compliance mode with a retention period of 365 days.
C. Use an IAM role to restrict all users from deleting or changing objects in the S3 bucket. Use an S3 bucket policy to only allow the IAM role.
D. Configure the S3 bucket to invoke an AWS Lambda function every time an object is added. Configure the function to track the hash of the
saved object so that modified objects can be marked accordingly.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
Answer : B
Reason: Compliance Mode. The key difference between Compliance Mode and Governance Mode is that there are NO users that can override the
retention periods set or delete an object, and that also includes your AWS root account which has the highest privileges.
upvoted 16 times
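Answer B translates into a bucket-level Object Lock default retention. A minimal sketch, assuming the bucket was created with Object Lock enabled; this dict is the shape boto3's `s3.put_object_lock_configuration` expects:

```python
# Default retention for answer B. In COMPLIANCE mode no user, including the
# account root user, can shorten the retention period or delete a protected
# object version before the 365 days expire.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365},
    },
}
```

Write access for the scientists and read-only access for everyone else would still be handled separately via IAM and bucket policies; Object Lock only covers the immutability and retention requirements.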
1 month ago
Compliance mode controls the object's life span after creation.
How does this option restrict which users can add new files? Please explain.
upvoted 2 times
5 months, 4 weeks ago
How about: The repository must allow a few scientists to add new files
upvoted 1 times
5 months, 3 weeks ago
Adding is not the same as changing :)
upvoted 7 times
Most Recent
4 days, 13 hours ago
Selected Answer: B
S3 Object Lock provides the necessary features to enforce immutability and retention of objects in an S3. Compliance mode ensures that the
locked objects cannot be deleted or modified by any user, including those with write access. By setting a retention period of 365 days, the
company can ensure that every file in the repository is kept for a minimum of 1 year after its creation date.
A does not provide the same level of protection as compliance mode. In governance mode, there is a possibility for authorized users to remove the
legal hold, potentially allowing objects to be modified or deleted.
C can restrict users from deleting or changing objects, but it does not enforce the retention period requirement. It also does not provide the same
level of immutability and protection against accidental or malicious modifications.
D does not address the requirement of preventing users from modifying or deleting files. It provides a mechanism for tracking changes but does
not enforce the desired access restrictions or retention period.
upvoted 1 times
1 month ago
Selected Answer: B
B,
The key is "No users can have the ability to modify or delete any files" and compliance mode supports that.
I remember it this way: ( governance is like government, they set the rules but they can allow some people to break it :D )
upvoted 3 times
1 month, 1 week ago
Am I the only one to worry about leap years ?
upvoted 1 times
Community vote distribution: B (77%), A (23%)
1 month, 4 weeks ago
Selected Answer: B
In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account. When an
object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened. Compliance mode helps
ensure that an object version can't be overwritten or deleted for the duration of the retention period.
In governance mode, users can't overwrite or delete an object version or alter its lock settings unless they have special permissions. With
governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention
settings or delete the object if necessary.
In Governance mode, Objects can be deleted by some users with special permissions, this is against the requirement.
upvoted 2 times
2 months, 1 week ago
Selected Answer: B
It's B; a legal hold has no retention period.
upvoted 3 times
2 months, 1 week ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
upvoted 1 times
2 months, 4 weeks ago
Both compliance and governance mode protect objects against being deleted or changed, but in governance mode some users can have special
permissions. In this question, no user can delete or modify files, so the answer is compliance mode. Neither of these modes restricts users from
adding new files.
upvoted 2 times
5 months ago
B. Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
users can have the ability to modify or delete any files in the repository ==> Compliance Mode
upvoted 1 times
5 months, 3 weeks ago
users cannot have the ability to modify or delete any files in the repository ==> Compliance Mode
upvoted 2 times
5 months, 4 weeks ago
Selected Answer: A
B would also meet the requirement to keep every file in the repository for at least 1 year after its creation date, as you can specify a retention
period of 365 days. However, it would not meet the requirement to restrict all users except a few scientists to read-only access. S3 Object Lock in
compliance mode only allows you to specify retention periods and does not have any options for controlling access to objects in the bucket.
To meet all the requirements, you should use S3 Object Lock in governance mode and use IAM policies to control access to the objects in the
bucket. This would allow you to specify a legal hold with a retention period of at least 1 year and to restrict all users except a few scientists to read-
only access.
upvoted 3 times
2 months, 3 weeks ago
Legal hold needs to be removed manually.
"The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold
prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in
effect until removed. "
upvoted 1 times
6 months ago
Selected Answer: B
No users can have the ability to modify or delete any files in the repository, hence it must be compliance mode.
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Answer is B
Compliance:
- Object versions can't be overwritten or deleted by any user, including the root user
- Objects retention modes can't be changed, and retention periods can't be shortened
Governance:
- Most users can't overwrite or delete an object version or alter its lock settings
- Some users have special permissions to change the retention or delete the object
upvoted 3 times
6 months, 1 week ago
Selected Answer: B
B is the best answer, but I feel none of the answers covers the requirement that only a few users (scientists) are able to upload (create) files in the
bucket while all other users have read-only access.
upvoted 3 times
6 months, 1 week ago
It is B per "No users can have the ability to modify or delete any files in the repository. ". Compliance mode supports that requirement whereas
Governance mode does not as defined via https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html.
upvoted 1 times
7 months ago
Selected Answer: A
ANSWER IS DEFINITELY A
upvoted 1 times
5 months, 3 weeks ago
Why is it not B?
upvoted 1 times
7 months, 1 week ago
B, I think. I'm not sure... thoughts?
upvoted 1 times
Topic 1
Question #155
A large media company hosts a web application on AWS. The company wants to start caching confidential media files so that users around the
world will have reliable access to the files. The content is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless
of where the requests originate geographically.
Which solution will meet these requirements?
A. Use AWS DataSync to connect the S3 buckets to the web application.
B. Deploy AWS Global Accelerator to connect the S3 buckets to the web application.
C. Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers.
D. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.
Correct Answer:
C
Highly Voted
7 months, 4 weeks ago
Key: caching.
Option C
upvoted 9 times
Most Recent
4 days, 13 hours ago
Selected Answer: C
CloudFront is a content delivery network (CDN) service provided by AWS. It caches content at edge locations worldwide, allowing users to access
the content quickly regardless of their geographic location. By connecting the S3 to CloudFront, the media files can be cached at edge locations,
ensuring reliable and fast delivery to users.
A. is a data transfer service that is not designed for caching or content delivery. It is used for transferring data between on-premises storage
systems and AWS services.
B. is a service that improves the performance and availability of applications for global users. While it can provide fast and reliable access, it is not
specifically designed for caching media files or connecting directly to S3.
D. is a message queue service that is not suitable for caching or content delivery. It is used for decoupling and coordinating message-based
communication between different components of an application.
Therefore, the correct solution is option C, deploying CloudFront to connect the S3 to CloudFront edge servers.
upvoted 1 times
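As a rough sketch of what option C involves, the core of a CloudFront `DistributionConfig` is an S3 origin plus a default cache behavior. The bucket name and origin access control ID below are placeholders, and the structure is trimmed to the essentials of what boto3's `cloudfront.create_distribution` accepts:

```python
distribution_config = {
    "CallerReference": "media-cdn-2023",  # idempotency token (placeholder)
    "Comment": "CDN for confidential media files",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "media-s3-origin",
                "DomainName": "media-bucket.s3.amazonaws.com",  # placeholder bucket
                "S3OriginConfig": {"OriginAccessIdentity": ""},
                # Origin access control keeps the bucket private so the files
                # are reachable only through CloudFront (placeholder ID).
                "OriginAccessControlId": "E2EXAMPLEOAC",
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "media-s3-origin",
        "ViewerProtocolPolicy": "https-only",
        # The AWS-managed "CachingOptimized" cache policy ID.
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
}
```

Because the files are confidential, a real deployment would also restrict viewer access with CloudFront signed URLs or signed cookies.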
2 weeks ago
Global Accelerator does not support Edge Caching
upvoted 1 times
1 month ago
Selected Answer: C
Option C is correct answer.
upvoted 1 times
3 months, 1 week ago
As far as I understand, Global Accelerator does not have caching features, so CloudFront would be the recommended service for that purpose
upvoted 1 times
4 months, 1 week ago
Selected Answer: C
C is correct
upvoted 1 times
5 months ago
C, Caching == Edge location == CloudFront
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
C right answer
upvoted 2 times
Community vote distribution: C (100%)
6 months, 2 weeks ago
Selected Answer: C
Agreed
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
Answer is C
upvoted 1 times
Topic 1
Question #156
A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and
application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the
incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business
intelligence tool to show key performance indicators (KPIs).
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
B. Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
C. Create custom AWS Lambda functions to move the individual records from the databases to an Amazon Redshift cluster.
D. Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load the data into multiple Amazon
OpenSearch Service (Amazon Elasticsearch Service) clusters.
E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract
the data, and load the data into Amazon S3 in Apache Parquet format.
Correct Answer:
AC
Highly Voted
8 months, 1 week ago
Selected Answer: AE
I believe AE makes the most sense
upvoted 9 times
Highly Voted
8 months ago
Selected Answer: AE
yeah AE makes sense, only E is working with S3 here and questions wants them to be in S3
upvoted 8 times
Most Recent
1 week ago
@Golcha, once the data comes from different sources, you use Glue.
upvoted 1 times
1 month ago
Selected Answer: AC
Less overhead with option AC - nothing to manage.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: AC
No specific use case for GLUE
upvoted 1 times
1 week ago
Once the data comes from different sources, you use Glue.
upvoted 1 times
2 months, 2 weeks ago
The Apache Parquet format is a performance-oriented, column-based data format designed for storage and retrieval. It is generally faster for reads
than writes because of its columnar storage layout and a pre-computed schema that is written with the data into the files. AWS Glue’s Parquet
writer offers fast write performance and flexibility to handle evolving datasets. You can use AWS Glue to read Parquet files from Amazon S3 and
from streaming sources as well as write Parquet files to Amazon S3. When using AWS Glue to build a data lake foundation, it automatically crawls
your Amazon S3 data, identifies data formats, and then suggests schemas for use with other AWS analytic services[1][2][3][4].
upvoted 1 times
2 months, 2 weeks ago
Answer AE: Amazon Athena is the best choice for running one-time queries on streaming data. Although Amazon Kinesis Data Analytics
provides an easy and familiar standard SQL language to analyze streaming data in real time, it is designed for continuous queries rather than one-
time queries[1]. On the other hand, Amazon Athena is a serverless interactive query service that allows querying data in Amazon S3 using SQL. It is
optimized for ad-hoc querying and is ideal for running one-time queries on streaming data[2]. AWS Lake Formation serves as a central place for
all your data for analytics purposes (E). Athena integrates perfectly with S3 and can run queries (A).
upvoted 1 times
Community vote distribution: AE (81%), Other
2 months, 3 weeks ago
Selected Answer: AE
AWS Lake Formation serves as a central place to keep all your data for analytics purposes (E). Athena integrates perfectly with S3 and can run
queries (A).
upvoted 2 times
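The one-time queries in option A are just `StartQueryExecution` calls against data Glue has catalogued in S3. A sketch of the request parameters (the database, table, and results bucket are hypothetical), passed as `athena.start_query_execution(**athena_query)` with a boto3 Athena client:

```python
# One-time Athena query over staged S3 data. Table "telemetry", database
# "analytics", and the results bucket are all hypothetical names.
athena_query = {
    "QueryString": (
        "SELECT sensor_id, avg(reading) AS avg_reading "
        "FROM telemetry GROUP BY sensor_id"
    ),
    "QueryExecutionContext": {"Database": "analytics"},
    "ResultConfiguration": {
        "OutputLocation": "s3://example-athena-results/",  # placeholder bucket
    },
}
```

There is no cluster to provision; Athena charges per data scanned, which is why querying Parquet (columnar, compressed) is cheaper than querying raw JSON or CSV.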
2 months, 3 weeks ago
Why S3 in Apache Parquet? https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-s3-announces-parquet-output-format-for-
inventory/
upvoted 1 times
4 months, 2 weeks ago
Can anyone please explain to me why B cannot be the answer?
upvoted 3 times
2 months, 1 week ago
Kinesis Data Analytics is designed for continuous queries rather than one-time queries.
upvoted 3 times
5 months ago
Can anyone help me with the question below?
36. A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse
messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as large
as 50 MB.
Which solution will meet these requirements with the FEWEST changes to the code?
a) Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
b) Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.
c) Change the limit in Amazon SQS to handle messages that are larger than 256 KB.
d) Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location in the
messages.
upvoted 1 times
4 months ago
I will do "A" as well.
upvoted 1 times
5 months ago
A would probably be the best answer. The SQS Extended Client Library is for Java apps.
upvoted 1 times
5 months, 1 week ago
Selected Answer: DE
I believe DE makes the most sense
upvoted 1 times
5 months, 1 week ago
Selected Answer: AE
Stored in S3 -> data lake -> Athena (query the Parquet data with SQL) -> QuickSight to visualize.
upvoted 4 times
5 months, 4 weeks ago
Selected Answer: BE
While Amazon Athena is a fully managed service that makes it easy to analyze data stored in Amazon S3 using SQL, it is primarily designed for
running ad-hoc queries on data stored in Amazon S3. It may not be the best choice for running one-time queries on streaming data, as it is not
designed to process data in real-time.
Additionally, using Amazon Athena for one-time queries on streaming data could potentially lead to higher operational overhead, as you would
need to set up and maintain the necessary infrastructure to stream the data into Amazon S3, and then query the data using Athena.
Using Amazon Kinesis Data Analytics, as mentioned in option B, would be a better choice for running one-time queries on streaming data, as it is
specifically designed to process data in real-time and can automatically scale to match the incoming data rate.
upvoted 2 times
5 months, 3 weeks ago
"Company needs to consolidate all the data into one place" -> S3 bucket, which is happening in E, which means Athena would not have an
issue, so A is ok.
upvoted 2 times
5 months ago
Absolutely, querying data is after staging and so Athena fits perfectly.
upvoted 1 times
6 months ago
Selected Answer: AE
C can work it out ,but has additional overhead.
upvoted 2 times
6 months, 1 week ago
Selected Answer: AE
A and E
upvoted 2 times
6 months, 3 weeks ago
Selected Answer: AC
I would go for AE as information needs to be stored in S3
upvoted 1 times
6 months, 3 weeks ago
Answer is AE: https://aws.amazon.com/blogs/big-data/enhance-analytics-with-google-trends-data-using-aws-glue-amazon-athena-and-amazon-quicksight/
upvoted 2 times
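Step E boils down to a Glue crawler over the staged S3 data. A sketch of the `glue.create_crawler` parameters; the crawler name, role ARN, database, and bucket path are all placeholders:

```python
# Glue crawler for step E: crawl the staged Parquet data in S3 and populate
# the Data Catalog so Athena can query it. All names below are placeholders.
crawler_params = {
    "Name": "staged-data-crawler",
    "Role": "arn:aws:iam::111122223333:role/GlueCrawlerRole",
    "DatabaseName": "analytics",
    "Targets": {
        "S3Targets": [
            {"Path": "s3://example-staging-bucket/parquet/"},
        ],
    },
    # Re-crawl nightly so new Parquet partitions are picked up automatically.
    "Schedule": "cron(0 2 * * ? *)",
}
```

The crawler is what keeps the operational overhead low: schema discovery and partition registration happen without any custom code.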
Topic 1
Question #157
A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data
after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has
automated backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Take a manual snapshot of the DB cluster.
B. Create a lifecycle policy for the automated backups.
C. Configure automated backup retention for 5 years.
D. Configure an Amazon CloudWatch Logs export for the DB cluster.
E. Use AWS Backup to take the backups and to keep the backups for 5 years.
Correct Answer:
BE
Highly Voted
5 months, 3 weeks ago
I tend to agree D and E...
A - Manual task that can be automated, so why make life difficult?
B - The maximum retention period is 35 days, so would not help
C - The maximum retention period is 35 days, so would not help
D - Only option that deals with logs, so makes sense
E - Partially manual but only option that achieves the 5 year goal
upvoted 14 times
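Options D and E together look roughly like this; the plan name, vault name, and cluster identifier are placeholders. The backup plan dict is what boto3's `backup.create_backup_plan(BackupPlan=...)` expects, and the log-export change goes to `rds.modify_db_cluster(**log_export_change)`:

```python
# E: AWS Backup plan that keeps Aurora backups for 5 years, then deletes them.
backup_plan = {
    "BackupPlanName": "aurora-5-year-plan",  # placeholder
    "Rules": [
        {
            "RuleName": "daily-aurora",
            "TargetBackupVaultName": "aurora-vault",  # placeholder vault
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 5 * 365},
        }
    ],
}

# D: export the cluster's PostgreSQL logs to CloudWatch Logs, where a log
# group with no retention policy keeps the audit trail indefinitely.
log_export_change = {
    "DBClusterIdentifier": "trial-aurora-cluster",  # placeholder
    "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["postgresql"]},
}
```

This sidesteps the 35-day ceiling on Aurora's built-in automated backup retention, which is why options B and C cannot satisfy the 5-year requirement.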
Highly Voted
7 months ago
Selected Answer: DE
dude trust me
upvoted 12 times
5 months, 3 weeks ago
No, please show your reasoning, you may be wrong. Remember, no one thinks they are wrong, but some always are :)
upvoted 9 times
Most Recent
3 months, 1 week ago
Selected Answer: AD
Automated backups are limited to 35 days.
upvoted 1 times
5 months ago
Selected Answer: DE
Previously, you had to create custom scripts to automate backup scheduling, enforce retention policies, or consolidate backup activity for manual
Aurora cluster snapshots, especially when coordinating backups across AWS services. With AWS Backup, you gain a fully managed, policy-based
backup solution with snapshot scheduling and snapshot retention management. You can now create, manage, and restore Aurora backups directly
from the AWS Backup console for both PostgreSQL-compatible and MySQL-compatible versions of Aurora.
To get started, select an Amazon Aurora cluster from the AWS Backup console and take an on-demand backup or simply assign the cluster to a
backup plan.
upvoted 4 times
5 months ago
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls
upvoted 2 times
5 months, 4 weeks ago
Selected Answer: DE
A is not a valid option for meeting the requirements. A manual snapshot of the DB cluster is a point-in-time copy of the data in the cluster. While
taking manual snapshots can be useful for creating backups of the data, it is not a reliable or efficient way to meet the requirement of storing all
the data for 5 years and deleting it after 5 years. It would be difficult to ensure that manual snapshots are taken regularly and retained for the
required period of time. It is recommended to use a fully managed backup service like AWS Backup, which can automate and centralize the process
of taking and retaining backups.
upvoted 3 times
Community vote distribution: DE (81%), AD (19%)
5 months, 4 weeks ago
Sorry, B and E are correct.
B. Create a lifecycle policy for the automated backups.
This would ensure that the backups taken using AWS Backup are retained for the desired period of time.
upvoted 1 times
5 months, 3 weeks ago
I think a lifecycle policy would only keep backups for 35 days
upvoted 3 times
6 months ago
Selected Answer: DE
D and E only
upvoted 2 times
6 months ago
AD
is correct, as you can keep a backup snapshot indefinitely.
upvoted 1 times
6 months, 1 week ago
Selected Answer: DE
D and E
upvoted 2 times
6 months, 2 weeks ago
Aurora backups are continuous and incremental so you can quickly restore to any point within the backup retention period. No performance
impact or interruption of database service occurs as backup data is being written. You can specify a backup retention period, from 1 to 35 days,
when you create or modify a DB cluster.
If you want to retain a backup beyond the backup retention period, you can also take a snapshot of the data in your cluster volume. Because
Aurora retains incremental restore data for the entire backup retention period, you only need to create a snapshot for data that you want to retain
beyond the backup retention period. You can create a new DB cluster from the snapshot.
upvoted 3 times
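The 35-day ceiling described above is what forces a manual snapshot for longer retention. A small sketch of that decision (the cluster and snapshot identifiers are hypothetical):

```python
MAX_AUTOMATED_RETENTION_DAYS = 35  # Aurora automated-backup ceiling

def needs_manual_snapshot(required_days: int) -> bool:
    """True when the retention target exceeds what automated backups allow."""
    return required_days > MAX_AUTOMATED_RETENTION_DAYS

if __name__ == "__main__":
    import boto3  # illustrative; needs credentials and a real cluster

    rds = boto3.client("rds")
    if needs_manual_snapshot(5 * 365):
        # Beyond 35 days: take a manual cluster snapshot, which is retained
        # until explicitly deleted.
        rds.create_db_cluster_snapshot(
            DBClusterSnapshotIdentifier="my-cluster-5yr-snap",  # hypothetical
            DBClusterIdentifier="my-aurora-cluster",            # hypothetical
        )
    else:
        # Within the ceiling: the automated retention period is enough.
        rds.modify_db_cluster(
            DBClusterIdentifier="my-aurora-cluster",
            BackupRetentionPeriod=MAX_AUTOMATED_RETENTION_DAYS,
        )
```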
6 months, 2 weeks ago
Selected Answer: DE
D is the only one that resolves the logging situation
"automated backup" = AWS Backup
https://aws.amazon.com/backup/faqs/?nc=sn&loc=6
AWS Backup provides a centralized console, automated backup scheduling, backup retention management, and backup monitoring and alerting.
AWS Backup offers advanced features such as lifecycle policies to transition backups to a low-cost storage tier. It also includes backup storage and
encryption independent from its source data, audit and compliance reporting capabilities with AWS Backup Audit Manager, and delete protection
with AWS Backup Vault Lock.
upvoted 2 times
6 months, 2 weeks ago
AD
Reason: When creating an Aurora backup, you need to specify a retention period between 1 and 35 days. This does not meet the 5-year
retention requirement in this case.
Hence, taking a manual snapshot is the best solution.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
upvoted 2 times
6 months, 3 weeks ago
Selected Answer: AD
no more than 35 days
upvoted 4 times
6 months, 2 weeks ago
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls AWS Backup
upvoted 3 times
6 months, 3 weeks ago
We all agree on D, but based on this documentation I think A could be the other correct answer:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
But if I'm wrong, let me know, please :)
upvoted 3 times
6 months, 2 weeks ago
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls AWS Backup
upvoted 1 times
7 months ago
Selected Answer: DE
DE Option
upvoted 3 times
7 months ago
Selected Answer: DE
D and E are the most sensible options here.
upvoted 3 times
7 months ago
Selected Answer: DE
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls
AWS Backup adds Amazon Aurora database cluster snapshots as its latest protected resource
upvoted 6 times
7 months ago
Selected Answer: DE
There is no sense in A if you can use AWS Backup and keep a snapshot for 5 years.
upvoted 4 times
5 months, 3 weeks ago
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/?nc1=h_ls AWS Backup
upvoted 1 times
6 months, 2 weeks ago
But the retention period is between 1 and 35 days when creating an Aurora backup using AWS Backup.
upvoted 1 times
Topic 1
Question #158
A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then
will be available on demand. The event is expected to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration
Correct Answer:
A
Highly Voted
7 months, 2 weeks ago
A is right
You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that
specifically require static IP addresses
upvoted 19 times
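The rule of thumb in the comment above can be written as a tiny helper. This is purely illustrative and not an AWS API:

```python
def pick_delivery_service(protocol: str, needs_static_ip: bool = False) -> str:
    """Rough heuristic from the comment above: HTTP-based streaming (which
    includes HLS and DASH) -> CloudFront; non-HTTP protocols (UDP, MQTT,
    VoIP) or a static-IP requirement -> Global Accelerator."""
    if protocol.lower() in {"http", "https", "hls", "dash"} and not needs_static_ip:
        return "Amazon CloudFront"
    return "AWS Global Accelerator"
```

Since both the live and on-demand streams in this question are HTTP workloads, the helper lands on CloudFront for both.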
Most Recent
4 days, 12 hours ago
Amazon CloudFront is a content delivery network (CDN) that can deliver both real-time and on-demand streaming. It caches content at edge
locations worldwide, providing low-latency delivery to a global audience.
B. AWS Global Accelerator: Global Accelerator is more suitable for non-HTTP use cases or when static IP addresses are required.
C. Amazon Route 53: Route 53 is a DNS service and not designed specifically for streaming video.
D. Amazon S3 Transfer Acceleration: S3 Transfer Acceleration improves upload speeds to Amazon S3 but does not directly enhance streaming
performance.
upvoted 1 times
1 month ago
Selected Answer: A
Serve video on demand or live streaming video
CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events.
For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth
Streaming, and CMAF, to any device.
For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the
fragments in the right order can be combined, to reduce the load on your origin server.
upvoted 1 times
1 month ago
Selected Answer: B
I vote for B. Global Accelerator.
CloudFront Video on Demand is specifically designed for delivering on-demand video content, meaning pre-recorded videos that can be streamed
or downloaded. It is not suitable for streaming real-time videos or live video broadcasts.
Global Accelerator help in reducing network hops between the user and AWS making real-time streams smoother.
upvoted 3 times
1 month ago
Selected Answer: B
To get the benefit of CloudFront, video needs to be cached, so requests should be frequent. For on-demand video, I vote for B.
upvoted 1 times
3 months, 1 week ago
How can Cloudfront help with real-time use case?
upvoted 2 times
5 months, 2 weeks ago
Amazon CloudFront
upvoted 1 times
Community vote distribution
A (64%)
B (36%)
5 months, 3 weeks ago
Selected Answer: A
CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IntroductionUseCases.html#IntroductionUseCasesStreaming
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
A Cloudfront
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
Cloudfront is used for live streaming and video on-demand
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IntroductionUseCases.html
upvoted 1 times
7 months ago
Selected Answer: A
I thought real-time streaming used the RTSP protocol, for which B is better.
But I realize now that real-time streaming can also be done over HTTP (like HLS, etc.).
So the answer should be A.
upvoted 2 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
CloudFront for sure
upvoted 1 times
Topic 1
Question #159
A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application's traffic
recently spiked due to fraudulent requests from botnets.
Which steps should a solutions architect take to block requests from unauthorized users? (Choose two.)
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.
Correct Answer:
CD
2 weeks, 6 days ago
Selected Answer: AC
If you're wondering why A. It's because you can configure usage plans and API keys to allow customers to access selected APIs, and begin
throttling requests to those APIs based on defined limits and quotas. As for C. It's because AWS WAF has bot detection capabilities.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: CE
C) WAF has bot identification and remediation tools, so it's CORRECT.
A) Remember the question: "...block requests from unauthorized users?" An API key is involved in an authorization process. It's not the most
secure process, but it's better than a totally anonymous one. If you don't know the key, you can't authenticate. So the bots, at least for the first
days/weeks, could not access the service (eventually they will, because the key will be spread informally). So it's CORRECT.
B) Implementing logic in the Lambda to detect fraudulent IPs is almost impossible, because it's a dynamic, changing pattern that you cannot
handle easily.
E) Creating a role is not going to make you more protected from unauthorized requests, because a role is a "principal"; it's not involved in the
authorization process.
upvoted 3 times
3 months, 3 weeks ago
It should be A and C
But API Key alone can not help
API keys are alphanumeric string values that you distribute to application developer customers to grant access to your API. You can use API keys
together with Lambda authorizers, IAM roles, or Amazon Cognito to control access to your APIs.
upvoted 1 times
4 months ago
Selected Answer: CE
Here https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html it says this:
Don't use API keys for authentication or authorization for your APIs. If you have multiple APIs in a usage plan, a user with a valid API key for one
API in that usage plan can access all APIs in that usage plan. Instead, use an IAM role, a Lambda authorizer, or an Amazon Cognito user pool.
API keys are intended for software developers wanting to access an API from their application. This link then goes on to say an IAM role should be
used instead.
upvoted 1 times
4 months ago
Nevermind my answer. I switch it to A/C because the question states the application is *using* the API Gateway so A will make sense
upvoted 1 times
5 months, 1 week ago
Selected Answer: AC
A/C for security to prevent anonymous access
upvoted 3 times
Community vote distribution
AC (79%)
CE (21%)
5 months, 3 weeks ago
I'm thinking A and C
A - the API is publicly accessible but there is nothing to stop the company requiring users to register for access.
B - you can do this with Lambda, AWS Network Firewall and Amazon GuardDuty, see https://aws.amazon.com/blogs/security/automatically-block-
suspicious-traffic-with-aws-network-firewall-and-amazon-guardduty/, but these components are not mentioned
C - a WAF is the logical choice with its bot detection capabilities
D - a private API is only accessible within a VPC, so this would not work
E - would be even more work than A
upvoted 3 times
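A hedged sketch combining both ideas: a per-IP counter that mimics what a WAF rate-based rule evaluates per window, plus the boto3 call that creates a usage plan as in option A (the plan name, limits, and window are made-up values):

```python
from collections import Counter

RATE_LIMIT_PER_WINDOW = 100  # hypothetical requests allowed per IP per window

def over_limit(request_ips: list[str], limit: int = RATE_LIMIT_PER_WINDOW) -> set[str]:
    """Return the source IPs whose request count exceeds the limit -- the same
    idea a WAF rate-based rule evaluates over a rolling 5-minute window."""
    counts = Counter(request_ips)
    return {ip for ip, n in counts.items() if n > limit}

if __name__ == "__main__":
    import boto3  # illustrative only; requires AWS credentials

    apigw = boto3.client("apigateway")
    apigw.create_usage_plan(
        name="genuine-users",  # hypothetical plan shared with genuine users
        throttle={"rateLimit": 50.0, "burstLimit": 100},
        quota={"limit": 10000, "period": "DAY"},
    )
```

As the docs quoted elsewhere in this thread note, usage-plan throttling is best-effort, so the WAF rule (option C) does the actual blocking.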
5 months, 3 weeks ago
Selected Answer: AC
https://www.examtopics.com/discussions/amazon/view/61082-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
6 months ago
Selected Answer: AC
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 2 times
6 months ago
I do not agree with A, as it is mentioned that the application is publicly accessible: "A company is running a publicly accessible serverless
application that uses Amazon API Gateway and AWS Lambda". If it is public, how can we ensure a user is genuine?
I will go with CD
upvoted 3 times
6 months ago
Selected Answer: AC
A and C. C is obvious; A is the only other option that puts a quota in place. API keys are alphanumeric string values that you distribute to
application developer customers to grant access to your API. You can use API keys together with Lambda authorizers, IAM roles, or Amazon
Cognito to control access to your APIs.
upvoted 1 times
6 months, 1 week ago
Selected Answer: AC
A and C
upvoted 1 times
7 months ago
Selected Answer: AC
A and C are the correct choices.
upvoted 1 times
7 months ago
Selected Answer: AC
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
upvoted 1 times
7 months ago
Only answer C is an obvious choice. B and D are clearly not right, and A is the only remotely viable other answer, but even then the
documentation on API keys and usage quotas states not to rely on them to block API requests:
Usage plan throttling and quotas are not hard limits, and are applied on a best-effort basis. In some cases, clients can exceed the quotas that you
set. Don’t rely on usage plan quotas or throttling to control costs or block access to an API. Consider using AWS Budgets to monitor costs and AWS
WAF to manage API requests.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
upvoted 3 times
7 months ago
Selected Answer: AC
A and C
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: AC
use usage plan API key
upvoted 2 times
7 months, 2 weeks ago
A and C
upvoted 3 times
Topic 1
Question #160
An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data
is stored in JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds
if it is needed, and the data must be kept for 30 days.
Which solution meets these requirements MOST cost-effectively?
A. Amazon OpenSearch Service (Amazon Elasticsearch Service)
B. Amazon S3 Glacier
C. Amazon S3 Standard
D. Amazon RDS for PostgreSQL
Correct Answer:
C
Highly Voted
7 months, 2 weeks ago
Selected Answer: C
Ans C:
Cost-effective solution with milliseconds of retrieval -> it should be s3 standard
upvoted 7 times
Most Recent
4 days, 12 hours ago
S3 Standard is a highly durable and scalable storage option suitable for backup and disaster recovery purposes. It offers millisecond access to data
when needed and provides durability guarantees. It is also cost-effective compared to other storage options like OpenSearch Service, S3 Glacier,
and RDS for PostgreSQL, which may have higher costs or longer access times for retrieving the data.
A. OpenSearch Service (Elasticsearch Service): While it offers fast data retrieval, it may incur higher costs compared to storing data directly in S3,
especially considering the amount of data being generated.
B. S3 Glacier: While it provides long-term archival storage at a lower cost, it does not meet the requirement of immediate access in milliseconds.
Retrieving data from Glacier typically takes several hours.
D. RDS for PostgreSQL: While it can be used for data storage, it may be overkill and more expensive for a backup and disaster recovery solution
compared to S3 Standard, which is more suitable and cost-effective for storing and retrieving data.
upvoted 1 times
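The 30-day retention requirement in this question maps directly to an S3 lifecycle expiration rule. A minimal sketch (the bucket name is hypothetical):

```python
def expiration_rule(days: int) -> dict:
    """Build an S3 lifecycle rule that deletes objects after `days` days."""
    if days < 1:
        raise ValueError("expiration must be at least 1 day")
    return {
        "ID": f"expire-after-{days}-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},       # apply to every object in the bucket
        "Expiration": {"Days": days},
    }

if __name__ == "__main__":
    import boto3  # illustrative only; requires AWS credentials

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="analytics-dr-backup",  # hypothetical bucket
        LifecycleConfiguration={"Rules": [expiration_rule(30)]},
    )
```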
2 weeks, 4 days ago
Selected Answer: B
https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/
upvoted 2 times
4 months, 2 weeks ago
A. Incorrect
Amazon OpenSearch Service (Amazon Elasticsearch Service) is designed for full-text search and analytics, but it may not be the most cost-effective
solution for this use case
B. Incorrect
S3 Glacier is a cold storage solution that is designed for long-term data retention and infrequent access.
C. Correct
S3 Standard is cost-effective and meets the requirement. With a lifecycle rule, objects can be expired after a specific number of days.
D. PostgreSQL is a relational database service and may not be the most cost-effective solution.
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: B
S3 Glacier Instant Retrieval – Use for archiving data that is rarely accessed and requires milliseconds retrieval.
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
Option C
upvoted 1 times
6 months, 2 weeks ago
Community vote distribution
C (87%)
13%
Selected Answer: C
JSON is object notation. S3 stores objects.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
c IS correct
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
IMHO
Normally ElasticSearch would be ideal here; however, as the question states "Most cost-effective",
S3 is the best choice in this case
upvoted 3 times
6 months, 4 weeks ago
ElasticSearch is a search service; the question here asks about the backup service required for the DR scenario.
upvoted 2 times
Topic 1
Question #161
A company has a small Python application that processes JSON documents and outputs the results to an on-premises SQL database. The
application runs thousands of times each day. The company wants to move the application to the AWS Cloud. The company needs a highly
available solution that maximizes scalability and minimizes operational overhead.
Which solution will meet these requirements?
A. Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple Amazon EC2 instances to process the documents.
Store the results in an Amazon Aurora DB cluster.
B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents
as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.
C. Place the JSON documents in an Amazon Elastic Block Store (Amazon EBS) volume. Use the EBS Multi-Attach feature to attach the volume
to multiple Amazon EC2 instances. Run the Python code on the EC2 instances to process the documents. Store the results on an Amazon RDS
DB instance.
D. Place the JSON documents in an Amazon Simple Queue Service (Amazon SQS) queue as messages. Deploy the Python code as a container
on an Amazon Elastic Container Service (Amazon ECS) cluster that is configured with the Amazon EC2 launch type. Use the container to
process the SQS messages. Store the results on an Amazon RDS DB instance.
Correct Answer:
D
Highly Voted
7 months, 2 weeks ago
Selected Answer: B
solution should remove operation overhead -> s3 -> lambda -> aurora
upvoted 9 times
1 week, 4 days ago
Aurora supports MySQL and PostgreSQL, but the question mentions a SQL Server database. So that eliminates B, and the other logical answer is
D, IMHO. BTW, I also thought the answer was B and started re-reading the question carefully.
upvoted 1 times
Highly Voted
5 months, 3 weeks ago
Selected Answer: B
By placing the JSON documents in an S3 bucket, the documents will be stored in a highly durable and scalable object storage service. The use of
AWS Lambda allows the company to run their Python code to process the documents as they arrive in the S3 bucket without having to worry about
the underlying infrastructure. This also allows for horizontal scalability, as AWS Lambda will automatically scale the number of instances of the
function based on the incoming rate of requests. The results can be stored in an Amazon Aurora DB cluster, which is a fully-managed, high-
performance database service that is compatible with MySQL and PostgreSQL. This will provide the necessary durability and scalability for the
results of the processing.
upvoted 7 times
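The S3-triggered Lambda flow described above receives an event payload listing the new objects. A sketch of the handler (the Aurora write is stubbed out, since connection details would be deployment-specific):

```python
import urllib.parse

def extract_objects(event: dict) -> list[tuple[str, str]]:
    """Pull (bucket, key) pairs from an S3 put-event payload.
    Keys arrive URL-encoded, so they must be unquoted first."""
    out = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        out.append((s3["bucket"]["name"], key))
    return out

def handler(event, context=None):
    """Sketch of the Lambda entry point: for each JSON document, fetch,
    process, and write the result to Aurora (both steps stubbed here)."""
    results = []
    for bucket, key in extract_objects(event):
        # body = s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()
        # doc = json.loads(body)  # real processing and the Aurora INSERT
        # would happen here, e.g. via an RDS Data API or a DB driver
        results.append({"bucket": bucket, "key": key, "status": "processed"})
    return results
```

Lambda scales this handler automatically with the arrival rate, which is what makes option B the low-overhead choice.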
Most Recent
2 days, 18 hours ago
Selected Answer: B
Using Lambda eliminates the need to manage and provision servers, ensuring scalability and minimizing operational overhead. S3 provides durable
and highly available storage for the JSON documents. Lambda can be triggered automatically whenever new documents are added to the S3
bucket, allowing for real-time processing. Storing the results in an Aurora DB cluster ensures high availability and scalability for the processed data.
This solution leverages serverless architecture, allowing for automatic scaling and high availability without the need for managing infrastructure,
making it the most suitable choice.
A. This option requires manual management and scaling of EC2 instances, resulting in higher operational overhead and complexity.
C. This approach still involves manual management and scaling of EC2 instances, increasing operational complexity and overhead.
D. This solution requires managing and scaling an ECS cluster, adding operational overhead and complexity. Utilizing SQS adds complexity to the
system, requiring custom handling of message consumption and processing in the Python code.
upvoted 1 times
1 month ago
Selected Answer: B
Keywords here are "maximizes scalability and minimizes operational overhead", hence option B is the correct answer.
upvoted 1 times
Community vote distribution
B (92%)
8%
2 months, 2 weeks ago
Selected Answer: D
I vote for D, as an 'on-premises SQL database' is not necessarily MySQL/PostgreSQL, which is what Aurora can replace.
upvoted 2 times
4 months ago
B is the best option. https://aws.amazon.com/rds/aurora/
upvoted 1 times
6 months ago
Selected Answer: B
agree...B is the best option S3, Lambda , Aurora.
upvoted 1 times
6 months ago
Selected Answer: B
Choosing B as "The company needs a highly available solution that maximizes scalability and minimizes operational overhead"
upvoted 1 times
6 months, 1 week ago
B is tempting, but note this sentence: "runs thousands of times each day." If we use Lambda as in B, won't this incur a high bill at the end?
upvoted 1 times
6 months ago
Agree, but the question doesn't have cost as a criterion. The criterion is "The company needs a highly available solution that maximizes
scalability and minimizes operational overhead". Hence B
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
7 months ago
Selected Answer: B
D is incorrect because using ECS with the EC2 launch type entails a lot of admin overhead, so B is the correct one.
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: B
B is the answer
https://aws.amazon.com/rds/aurora/
upvoted 1 times
7 months, 2 weeks ago
D is correct option
upvoted 1 times
7 months, 1 week ago
ehhhhhh
upvoted 4 times
Topic 1
Question #162
A company wants to use high performance computing (HPC) infrastructure on AWS for financial risk modeling. The company's HPC workloads run
on Linux. Each HPC workflow runs on hundreds of Amazon EC2 Spot Instances, is short-lived, and generates thousands of output files that are
ultimately stored in persistent storage for analytics and long-term future use.
The company seeks a cloud storage solution that permits the copying of on-premises data to long-term persistent storage to make data available
for processing by all EC2 instances. The solution should also be a high performance file system that is integrated with persistent storage to read
and write datasets and output files.
Which combination of AWS services meets these requirements?
A. Amazon FSx for Lustre integrated with Amazon S3
B. Amazon FSx for Windows File Server integrated with Amazon S3
C. Amazon S3 Glacier integrated with Amazon Elastic Block Store (Amazon EBS)
D. Amazon S3 bucket with a VPC endpoint integrated with an Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) volume
Correct Answer:
A
Highly Voted
6 months, 2 weeks ago
Selected Answer: A
If you see HPC and Linux both in the question.. Pick Amazon FSx for Lustre
upvoted 13 times
5 months, 3 weeks ago
yeap, you’re right!
upvoted 1 times
Highly Voted
5 months, 3 weeks ago
Selected Answer: A
Additional keywords: make data available for processing by all EC2 instances ==> FSx
In absence of EFS, it should be FSx. Amazon FSx For Lustre provides a high-performance, parallel file system for hot data
upvoted 5 times
Most Recent
2 days, 18 hours ago
Selected Answer: A
FSx for Lustre is a high-performance file system optimized for compute-intensive workloads. It provides scalable, parallel access to data and is
suitable for HPC applications.
By integrating FSx for Lustre with S3, you can easily copy on-premises data to long-term persistent storage in S3, making it available for processing
by EC2 instances.
S3 serves as the durable and highly scalable object storage for storing the output files, allowing for analytics and long-term future use.
Option B, FSx for Windows File Server, is not suitable because the workloads run on Linux, and this option is designed for Windows file sharing.
Option C, S3 Glacier integrated with EBS, is not the best choice as it is a low-cost archival storage service and not optimized for high-performance
file system requirements.
Option D, using an S3 bucket with a VPC endpoint integrated with an Amazon EBS General Purpose SSD (gp2) volume, does not provide the
required high-performance file system capabilities for HPC workloads.
upvoted 1 times
1 month ago
Selected Answer: A
Option A is right answer.
upvoted 1 times
4 months ago
FSx for Lustre makes it easy and cost-effective to launch and run the popular, high-performance Lustre file system. You use Lustre for workloads
where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.
Amazon Fsx for Lustre is integrated with Amazon S3.
upvoted 2 times
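The S3 integration mentioned above is configured through the file system's Lustre configuration. A minimal boto3 sketch (the bucket, subnet, and sizing are hypothetical):

```python
def lustre_s3_config(bucket: str, prefix: str = "") -> dict:
    """Build the LustreConfiguration S3 linkage: ImportPath pulls the
    on-premises data copied into S3; ExportPath writes results back."""
    path = f"s3://{bucket}/{prefix}".rstrip("/")
    return {"ImportPath": path, "ExportPath": path}

if __name__ == "__main__":
    import boto3  # illustrative only; requires AWS credentials

    fsx = boto3.client("fsx")
    fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,                       # smallest Lustre size, in GiB
        SubnetIds=["subnet-0123456789abcdef0"],     # hypothetical subnet
        LustreConfiguration=lustre_s3_config("hpc-datasets"),
    )
```

Every EC2 Spot Instance then mounts the same Lustre file system, while S3 remains the long-term persistent store, which is the pairing option A describes.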
Community vote distribution
A (100%)
5 months, 3 weeks ago
Selected Answer: A
Amazon FSx for Lustre integrated with Amazon S3
upvoted 1 times
6 months ago
Selected Answer: A
A is right choice here.
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A is the best high performance storage with integration to S3
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
The requirement is a file system and the workload runs on Linux, so plain S3 and FSx for Windows File Server are not options
upvoted 1 times
6 months, 2 weeks ago
A
The Amazon FSx for Lustre service is a fully managed, high-performance file system that makes it easy to move and process large amounts of data
quickly and cost-effectively. It provides a fully managed, cloud-native file system with low operational overhead, designed for massively parallel
processing and high-performance workloads. The Lustre file system is a popular, open source parallel file system that is well-suited for a variety of
applications such as HPC, image processing, AI/ML, media processing, data analytics, and financial modeling, among others. With Amazon FSx for
Lustre, you can quickly create and configure new file systems in minutes, and easily scale the size of your file system up or down
upvoted 2 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 2 weeks ago
A - for HPC "Amazon FSx for Lustre" and long-term persistence "S3"
upvoted 1 times
7 months, 2 weeks ago
Amazon FSx for Lustre:
• HPC optimized distributed file system, millions of IOPS
• Backed by S3
upvoted 3 times
7 months, 2 weeks ago
Answer A
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
FSx for Lustre integrated with S3
upvoted 1 times
Topic 1
Question #163
A company is building a containerized application on premises and decides to move the application to AWS. The application will have thousands
of users soon after it is deployed. The company is unsure how to manage the deployment of containers at scale. The company needs to deploy
the containerized application in a highly available architecture that minimizes operational overhead.
Which solution will meet these requirements?
A. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service
(Amazon ECS) cluster with the AWS Fargate launch type to run the containers. Use target tracking to scale automatically based on demand.
B. Store container images in an Amazon Elastic Container Registry (Amazon ECR) repository. Use an Amazon Elastic Container Service
(Amazon ECS) cluster with the Amazon EC2 launch type to run the containers. Use target tracking to scale automatically based on demand.
C. Store container images in a repository that runs on an Amazon EC2 instance. Run the containers on EC2 instances that are spread across
multiple Availability Zones. Monitor the average CPU utilization in Amazon CloudWatch. Launch new EC2 instances as needed.
D. Create an Amazon EC2 Amazon Machine Image (AMI) that contains the container image. Launch EC2 instances in an Auto Scaling group
across multiple Availability Zones. Use an Amazon CloudWatch alarm to scale out EC2 instances when the average CPU utilization threshold
is breached.
Correct Answer:
C
Highly Voted
7 months, 2 weeks ago
Selected Answer: A
AWS Fargate
upvoted 9 times
Most Recent
2 days, 18 hours ago
Selected Answer: A
ECR provides a secure and scalable repository to store and manage container images. ECS with the Fargate launch type allows you to run
containers without managing the underlying infrastructure, providing a serverless experience. Target tracking in ECS can automatically scale the
number of tasks or services based on a target value such as CPU or memory utilization, ensuring that the application can handle increasing
demand without manual intervention.
Option B is not the best choice because using the EC2 launch type requires managing and scaling EC2 instances, which increases operational
overhead.
Option C is not the optimal solution as it involves managing the container repository on an EC2 instance and manually launching EC2 instances,
which adds complexity and operational overhead.
Option D also requires managing EC2 instances, configuring ASGs, and setting up manual scaling rules based on CloudWatch alarms, which is not
as efficient or scalable as using Fargate in combination with ECS.
upvoted 1 times
1 month ago
Selected Answer: A
ECS + Fargate satisfy requirements, hence option A is the best solution.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
minimize operational overhead = Serverless
Fargate is Serverless
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
Correct is "A"
upvoted 1 times
2 months, 3 weeks ago
You can place the Fargate launch type all in one AZ or across multiple AZs, but option A does not take care of the high-availability requirement
of the question. With option C we have multi-AZ.
Community vote distribution
A (100%)
2 months, 3 weeks ago
Selected Answer: A
A
Why? Because Fargate provisions resources on demand.
upvoted 2 times
6 months ago
Option A
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2
instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need
to choose server types, decide when to scale your clusters, or optimize cluster packing.
upvoted 1 times
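Target tracking for an ECS service, as in option A, goes through Application Auto Scaling. A hedged boto3 sketch (the cluster, service, capacity bounds, and the 75% CPU target are made-up values):

```python
def ecs_resource_id(cluster: str, service: str) -> str:
    """Format the scalable-target ResourceId for an ECS service."""
    return f"service/{cluster}/{service}"

if __name__ == "__main__":
    import boto3  # illustrative only; requires AWS credentials

    aas = boto3.client("application-autoscaling")
    rid = ecs_resource_id("web-cluster", "web-svc")  # hypothetical names
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=rid,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=50,
    )
    # Target tracking: ECS adds or removes Fargate tasks to hold average
    # CPU utilization near the target value.
    aas.put_scaling_policy(
        PolicyName="cpu-target-75",
        ServiceNamespace="ecs",
        ResourceId=rid,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 75.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )
```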
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
"minimizes operational overhead" --> Fargate is serverless
upvoted 2 times
6 months, 2 weeks ago
A
AWS Fargate is a serverless experience for user applications, allowing the user to concentrate on building applications instead of configuring and
managing servers. Fargate also automates resource management, allowing users to easily scale their applications in response to demand.
upvoted 1 times
7 months ago
Selected Answer: A
Fargate is the only serverless option.
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: A
AWS Fargate
upvoted 2 times
7 months, 2 weeks ago
I think A is the correct option. AWS Fargate
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
A seems right
upvoted 4 times
Topic 1
Question #164
A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended
to receive the messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The
sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to
process, they must be retained so that they do not impact the processing of any remaining messages.
Which solution meets these requirements and is the MOST operationally efficient?
A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the
messages, respectively.
B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the
Kinesis Client Library (KCL).
C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue
to collect the messages that failed to process.
D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process.
Integrate the sender application to write to the SNS topic.
Correct Answer:
C
Highly Voted
5 months, 3 weeks ago
Selected Answer: C
Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed (consumed)
successfully.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
upvoted 5 times
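The redrive behavior described here can be sketched with a toy in-memory queue. This is a simulation only; real SQS configures it declaratively via the queue's RedrivePolicy (maxReceiveCount and deadLetterTargetArn), and all names below are illustrative:

```python
# Toy simulation of an SQS-style dead-letter queue redrive policy.
# Real SQS sets this declaratively on the queue via a RedrivePolicy
# ({"maxReceiveCount": N, "deadLetterTargetArn": ...}); the names
# and the process() failure rule here are made up for illustration.
from collections import deque

MAX_RECEIVE_COUNT = 3  # after 3 failed receives, park the message in the DLQ

def process(message):
    """Pretend processor: messages containing 'bad' always fail."""
    if "bad" in message:
        raise ValueError("cannot process")

def drain(queue):
    dlq = []
    receive_counts = {}
    while queue:
        msg = queue.popleft()
        receive_counts[msg] = receive_counts.get(msg, 0) + 1
        try:
            process(msg)           # success: message is "deleted"
        except ValueError:
            if receive_counts[msg] >= MAX_RECEIVE_COUNT:
                dlq.append(msg)    # poison message moved to the DLQ
            else:
                queue.append(msg)  # retried without blocking other messages
    return dlq

queue = deque(["ok-1", "bad-payload", "ok-2"])
print(drain(queue))  # → ['bad-payload']
```

Note how the failing message is retried a bounded number of times and then set aside, so it never blocks the healthy messages behind it, which is exactly the requirement in the question.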
Most Recent
2 days, 18 hours ago
Selected Answer: C
By integrating both the sender and processor applications with an SQS, messages can be reliably sent from the sender to the processor application
for processing. SQS provides at-least-once delivery, ensuring that messages are not lost in transit. If a message fails to process, it can be retained in
the queue and retried without impacting the processing of other messages. Configuring a DLQ allows for the collection of messages that
repeatedly fail to process, providing visibility into failed messages for troubleshooting and analysis.
A is not the optimal choice as it involves managing and configuring an EC2 instance running a Redis, which adds operational overhead and
maintenance requirements.
B is not the most operationally efficient solution as it introduces additional complexity by using Amazon Kinesis data streams and integrating with
the Kinesis Client Library for message processing.
D, using SNS, is not the best fit for the scenario as it is more suitable for pub/sub messaging and broadcasting notifications rather than the specific
requirement of message processing between two applications.
upvoted 1 times
1 month ago
Selected Answer: C
Answer C. In the question, if the keywords mention failed processing >> SQS.
upvoted 1 times
1 month ago
Selected Answer: C
The solution that meets these requirements and is the most operationally efficient is option C. SQS is a buffer between the two apps.
upvoted 1 times
1 month ago
The visibility timeout must not be more than 12 hours. ( For SQS )
Jobs may take 2 days to process
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: C
operationally efficient = Serverless
SQS is serverless
upvoted 1 times
1 month, 2 weeks ago
SNS too is serverless, but it is obvious that it is not the correct answer in this case
upvoted 1 times
2 months ago
Selected Answer: C
The more realistic option is C.
The only problem with this is that the visibility timeout limit is 12 hours max; as the second application takes 2 days to process, there may be
duplicate messages being processed in the queue. This might complicate things.
upvoted 2 times
3 months, 4 weeks ago
SQS has a 12-hour limit for the visibility timeout.
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
Option C, using Amazon SQS, is a valid solution that meets the requirements of the company. However, it may not be the most operationally
efficient solution because SQS is a managed message queue service that requires additional operational overhead to handle the retention of
messages that failed to process. Option B, using Amazon Kinesis Data Streams, is more operationally efficient for this use case because it can
handle the retention of messages that failed to process automatically and provides the ability to process and analyze streaming data in real-time.
upvoted 1 times
4 months ago
A Kinesis stream saves data for up to 24 hours, which doesn't meet the 2-day requirement.
Kinesis streams don't have a fail-safe for failed processing, unlike SQS.
The correct answer is C - SQS.
upvoted 3 times
2 months ago
this is not a correct statement.
A data stream is a logical grouping of shards. There are no bounds on the number of shards within a data stream (request a limit increase if
you need more). A data stream will retain data for 24 hours by default, or optionally up to 365 days.
https://aws.amazon.com/kinesis/data-streams/getting-started/
upvoted 1 times
4 months, 3 weeks ago
There's no way for kinesis to know whether the message processing failed.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C.
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
This matches mostly the job of Dead Letter Q:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
vs
https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html
upvoted 4 times
7 months ago
Selected Answer: C
Option C is the correct ans
upvoted 1 times
7 months ago
Selected Answer: C
C is correct. B is wrong because the question asks for a way to let the two applications communicate, so the processing is already done.
upvoted 1 times
7 months ago
Selected Answer: B
Please explain why "B" is incorrect. How does SQS process data?
"KCL helps you consume and process data from a Kinesis data stream by taking care of many of the complex tasks associated with distributed
computing."
https://docs.aws.amazon.com/streams/latest/dev/shared-throughput-kcl-consumers.html
upvoted 1 times
5 months, 3 weeks ago
As per question, the processing application will take messages.
"The company wants to implement an AWS service to handle messages between the two applications."
upvoted 1 times
6 months, 3 weeks ago
The processing is done at the 2nd application level.
This seems to be the job of Dead Letter Q
upvoted 1 times
7 months ago
Kinesis may not have message retry - there is no way for Kinesis to know whether message processing failed. Messages can stay there until
their retention period expires.
upvoted 4 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
upvoted 2 times
7 months, 2 weeks ago
Option: C
Dead-letter queue
upvoted 1 times
Topic 1
Question #165
A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company’s
security policy requires that all website traffic be inspected by AWS WAF.
How should the solutions architect comply with these requirements?
A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only.
B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.
C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront.
D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF
on the distribution.
Correct Answer:
D
Highly Voted
7 months, 2 weeks ago
Answer D. Use an OAI to lockdown CloudFront to S3 origin & enable WAF on CF distribution
upvoted 16 times
6 months ago
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/ confirms use of OAI (and option D).
upvoted 4 times
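For reference, the OAI lock-down in option D comes down to an S3 bucket policy like the following, sketched here as a Python dict; the bucket name and OAI ID are placeholders:

```python
import json

# Placeholder identifiers -- substitute your own bucket and OAI ID.
BUCKET = "example-static-site"
OAI_ID = "E2EXAMPLEOAI"

# Bucket policy granting read access only to the CloudFront OAI, so
# viewers cannot bypass CloudFront (and therefore the WAF web ACL
# attached to the distribution) by hitting the S3 URL directly.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(policy, indent=2))
```

With this in place, the only path to the content runs through CloudFront, where WAF does the inspection.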
Most Recent
15 hours, 24 minutes ago
Selected Answer: B
I vote for B!
Option D is not correct, OAI in CloudFront and restricting access to the S3 bucket does not ensure that all website traffic is inspected by AWS WAF.
upvoted 1 times
2 days, 17 hours ago
Selected Answer: B
By configuring CloudFront to forward all incoming requests to AWS WAF, the traffic will be inspected by AWS WAF before reaching the S3 origin,
complying with the security policy requirement. This approach ensures that all website traffic is inspected by AWS WAF, providing an additional
layer of security before accessing the content stored in the S3 origin.
Option A is not the correct choice as configuring an S3 bucket policy to accept requests from the AWS WAF ARN only would bypass the inspection
of traffic by AWS WAF. It does not ensure that all website traffic is inspected.
Option C is not the optimal solution as it focuses on controlling access to S3 using a security group. Although it associates AWS WAF with
CloudFront, it does not guarantee that all incoming requests are inspected by AWS WAF.
Option D is not the recommended solution as configuring an OAI in CloudFront and restricting access to the S3 bucket does not ensure that all
website traffic is inspected by AWS WAF. The OAI is used for restricting direct access to S3 content, but the traffic should still pass through AWS
WAF for inspection.
upvoted 1 times
1 week, 5 days ago
Answer B:
If your origin is an Amazon S3 bucket configured as a website endpoint, you must set it up with CloudFront as a custom origin. That means you
can't use OAC (or OAI). However, you can restrict access to a custom origin by setting up custom headers and configuring the origin to require
them. For more information, see Restricting access to files on custom origins.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 1 times
5 days, 21 hours ago
wtf dot com?
upvoted 1 times
1 month ago
Selected Answer: D
The solution that complies with these requirements is option D.
For CloudFront + S3, OAI/OAC is the best approach.
upvoted 1 times
2 months ago
Selected Answer: D
Use an OAI to have access only from CloudFront to S3 origin & enable WAF on CF distribution
upvoted 1 times
2 months, 1 week ago
Selected Answer: B
I'm voting B because the traffic flows from the user to CloudFront, then from CloudFront to AWS WAF, and then back to CloudFront before being
sent to the S3 origin.
Regarding answer D, from what I can tell when you use OAI (or OAC) you don't use WAF, and the question specifically asks for us to use WAF.
upvoted 1 times
2 months ago
Actually speaking, you are able to enable WAF in CloudFront; there is nothing called forwarding.
upvoted 1 times
2 months, 1 week ago
B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.
Option B is the best solution for enforcing AWS WAF protection for a static website hosted on Amazon S3 through Amazon CloudFront. This
involves configuring Amazon CloudFront to forward incoming requests to AWS WAF before requesting content from the S3 origin to ensure that all
website traffic is inspected by AWS WAF.
upvoted 1 times
2 months, 2 weeks ago
ANSWER- B :CloudFront provides two ways to send authenticated requests to an Amazon S3 origin: origin access control (OAC) and origin access
identity (OAI). We recommend using OAC because it supports:
All Amazon S3 buckets in all AWS Regions, including opt-in Regions launched after December 2022
Amazon S3 server-side encryption with AWS KMS (SSE-KMS)
Dynamic requests (PUT and DELETE) to Amazon S3
OAI doesn't work for the scenarios in the preceding list, or it requires extra workarounds in those scenarios.
upvoted 1 times
3 months ago
To option B, If OAI is not used, how about the direct traffic to S3 be inspect by WAF?
upvoted 1 times
3 months, 1 week ago
Selected Answer: B
D is wrong because "..specifically, OAI doesn't support:
Amazon S3 buckets in all AWS Regions, including opt-in Regions"
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
upvoted 2 times
3 months, 3 weeks ago
According to ChatGPT:
To comply with the security policy that requires all website traffic to be inspected by AWS WAF, the solutions architect should configure Amazon
CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin. Therefore, option B is the correct answer.
Option A is not sufficient because it only restricts access to the S3 bucket, but it does not ensure that all website traffic is inspected by AWS WAF.
Option C is also not sufficient because it only allows Amazon CloudFront IP addresses to access Amazon S3, but it does not ensure that all website
traffic is inspected by AWS WAF.
Option D is partially correct because it uses an origin access identity (OAI) to restrict access to the S3 bucket, but it does not mention configuring
Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin. Therefore, it is not the best
answer.
upvoted 3 times
3 months, 3 weeks ago
Selected Answer: D
With option B, the question is whether WAF can be integrated with S3.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: D
The Answer is D.
upvoted 1 times
4 months, 1 week ago
It should be D. Refer to the section "Securing Your Content":
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/
upvoted 2 times
4 months, 3 weeks ago
For people who chose B as the right Answer, look at this link : https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html
"When you create a web ACL, you can specify one or more CloudFront distributions that you want AWS WAF to inspect. AWS WAF starts to inspect
and manage web requests for those distributions based on the criteria that you identify in the web ACL"
You don't configure Cloudfront to redirect traffic to WAF. You just create an ACL and points to the Cloudfront distribution.
So D is the best solution to secure and integrate Cloudfront with S3 and WAF.
From one side it protects your S3 Content by allowing user requests to access only the OAI.
And from other side it enable WAF to control traffic before reaching Cloudfront by creating a WAF Rule or ACL (Not redirecting Cloudfront traffic
to WAF which as a solution architect you cannot do)
upvoted 4 times
4 months, 3 weeks ago
Selected Answer: B
This explicitly explains the rationale for WAF forwarding (a new feature):
https://aws.amazon.com/blogs/security/how-to-enhance-amazon-cloudfront-origin-security-with-aws-waf-and-aws-secrets-manager/
upvoted 2 times
Topic 1
Question #166
Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from
users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective
solution.
Which action should the solutions architect take to accomplish this?
A. Generate presigned URLs for the files.
B. Use cross-Region replication to all Regions.
C. Use the geoproximity feature of Amazon Route 53.
D. Use Amazon CloudFront with the S3 bucket as its origin.
Correct Answer:
D
Highly Voted
6 months, 1 week ago
Selected Answer: D
The most effective and efficient solution would be Option D (Use Amazon CloudFront with the S3 bucket as its origin.)
Amazon CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as HTML pages,
images, and videos. By using CloudFront, the HTML pages will be served to users from the edge location that is closest to them, resulting in faster
delivery and a better user experience. CloudFront can also handle the high traffic and large number of requests expected for the global event,
ensuring that the HTML pages are available and accessible to users around the world.
upvoted 6 times
Most Recent
2 days, 17 hours ago
Selected Answer: D
CloudFront is well-suited for efficiently serving static HTML pages to users around the world. By using it with the S3 bucket as its origin, the static HTML
pages can be cached and distributed globally to edge locations, reducing latency and improving performance for users accessing the pages from
different regions. This solution ensures efficient and effective delivery of the daily reports to millions of users worldwide, providing a scalable and
high-performance solution for the global event.
A would allow temporary access to the files, but it does not address the scalability and performance requirements of serving millions of views
globally.
B is not necessary for this scenario as the goal is to distribute the static HTML pages efficiently to users worldwide, not replicate the files across
multiple Regions.
C is primarily used for routing DNS traffic based on the geographic location of users, but it does not provide the caching and content delivery
capabilities required for this use case.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
Agreed
upvoted 1 times
6 months, 2 weeks ago
answer is D agree with Shasha1
upvoted 1 times
6 months, 2 weeks ago
D
CloudFront is a content delivery network (CDN) offered by Amazon Web Services (AWS). It functions as a reverse proxy service that caches web
content across AWS's global data centers, improving loading speeds and reducing the strain on origin servers. CloudFront can be used to efficiently
deliver large amounts of static or dynamic content anywhere in the world.
upvoted 2 times
7 months, 1 week ago
D is correct
upvoted 2 times
7 months, 2 weeks ago
D
Static content on S3 and hence Cloudfront is the best way
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: D
D is the correct answer
upvoted 2 times
Topic 1
Question #167
A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and
processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually
process messages without any downtime.
Which solution meets these requirements MOST cost-effectively?
A. Use Spot Instances exclusively to handle the maximum capacity required.
B. Use Reserved Instances exclusively to handle the maximum capacity required.
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
Correct Answer:
C
Highly Voted
7 months, 1 week ago
Selected Answer: D
D is the correct answer
upvoted 16 times
4 months, 2 weeks ago
C is correct, read for cost effectiveness
upvoted 3 times
3 months ago
if you cannot find enough spot instance you will have downtime
you cannot always find spot instance
upvoted 8 times
1 month ago
Why downtime when there are baseline reserved instances?
upvoted 1 times
Highly Voted
5 months, 3 weeks ago
Selected Answer: C
"without any downtime" - Reserved Instances for the baseline capacity
"MOST cost-effectively" - Spot Instances to handle additional capacity
upvoted 9 times
3 months ago
How can you have baseline capacity when your message volume is unpredictable and often has intermittent traffic?
upvoted 1 times
5 months, 1 week ago
Dude, read the question, cost consideration was not mentioned in the question.
upvoted 1 times
5 months, 1 week ago
Dude, read the question, "Which solution meets these requirements MOST cost-effectively?"
upvoted 15 times
2 months ago
Cost-effectively means the cheapest solution (cost) that achieves all the requirements (effectively). It's not cost-effective if it is just the
cheapest solution that fails to address all the requirements, in this case "This application should continually process messages without any
downtime" no matter the volume, since it is unpredictable. B, for example, addresses the requirement but is not the cheapest solution that
achieves it. D is the cheaper choice that addresses the requirement (without any downtime), and C is cheaper than D but does not guarantee
that you won't have downtime since it uses Spot Instances.
upvoted 3 times
2 months, 3 weeks ago
I am leaning towards C because the idea of having a queue is to decouple the processing. If an instance (Spot) goes down while
processing, won't the message show up again after the visibility timeout? So using Spot meets the cost-effective objective.
upvoted 4 times
Most Recent
2 days, 17 hours ago
Selected Answer: C
By reserving instances in advance, the company can benefit from discounted pricing compared to On-Demand instances. As the message volume is
unpredictable and intermittent, utilizing Spot Instances can provide additional capacity during peak periods at a much lower cost. By combining
both, the company can optimize costs while ensuring continuous processing of messages without downtime.
Option A (using Spot Instances exclusively) may result in interruptions or terminations when the Spot price exceeds the bid or there is a capacity
constraint, leading to potential downtime or message processing delays.
Option B (using Reserved Instances exclusively) may result in higher costs as Reserved Instances are more suitable for predictable or baseline
workloads, and the company may incur unused capacity during periods of low message volume.
Option D (using Reserved Instances for baseline capacity and On-Demand Instances for additional capacity) can be costlier compared to using Spot
Instances for additional capacity as On-Demand Instances do not offer the same level of cost savings as Spot Instances.
upvoted 1 times
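The trade-off argued in this thread can be made concrete with back-of-the-envelope arithmetic. The hourly prices and capacity numbers below are invented placeholders, not real AWS rates:

```python
# Rough cost comparison of the capacity strategies in options B, C, D.
# All prices and capacity figures are made-up placeholders.
ON_DEMAND = 0.10   # $/instance-hour
RESERVED  = 0.06   # $/instance-hour (committed)
SPOT      = 0.03   # $/instance-hour (interruptible)

BASELINE_INSTANCES = 4    # always-on capacity
PEAK_EXTRA         = 6    # extra instances needed during bursts
PEAK_HOURS         = 200  # burst hours per month
HOURS_PER_MONTH    = 730

def monthly(base_price, burst_price):
    """Baseline runs around the clock; burst capacity only during peaks."""
    base = BASELINE_INSTANCES * base_price * HOURS_PER_MONTH
    burst = PEAK_EXTRA * burst_price * PEAK_HOURS
    return round(base + burst, 2)

# B: reserve the maximum (baseline + peak) around the clock.
cost_b = round((BASELINE_INSTANCES + PEAK_EXTRA) * RESERVED * HOURS_PER_MONTH, 2)
cost_c = monthly(RESERVED, SPOT)       # C: reserved baseline + Spot bursts
cost_d = monthly(RESERVED, ON_DEMAND)  # D: reserved baseline + On-Demand bursts

print(cost_b, cost_c, cost_d)
```

Under these assumptions C is cheapest and B most expensive; the debate in the thread is whether C's Spot bursts can violate the "no downtime" requirement, which this arithmetic alone cannot settle.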
1 week, 3 days ago
Selected Answer: C
C is correct based on cost-effectiveness.
upvoted 1 times
1 week, 6 days ago
Selected Answer: D
D is correct.
upvoted 1 times
3 weeks, 6 days ago
Selected Answer: D
I'd say C if it was a Spot Fleet, not spot instances.
upvoted 1 times
1 month ago
Selected Answer: D
The number of EC2 instances needs to scale with a guarantee for high volumes of traffic.
Spot Instances don't give that guarantee and therefore may cause downtime.
Hence, D, not C.
upvoted 1 times
1 month ago
Selected Answer: D
D is correct
upvoted 1 times
1 month ago
Selected Answer: C
Downtime means a service is completely down.
Opting C will not cause downtime as we have baseline reserved instances. Performance may degrade if spot instances are not available.
But overall, Spot Instances are more cost-effective than On-Demand Instances, and the requirement is satisfied.
upvoted 2 times
1 month ago
Selected Answer: C
Option C is MOST cost-effectively.
upvoted 1 times
1 month ago
Selected Answer: D
D is the answer. C is wrong because "the application should continually process messages without any downtime" ... you can't be certain to find
spot instances when you need them
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
without any downtime = NO Spot Instances
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: C
Key requirements:
- no downtime
- most cost-effective
Reserved instances for baseline capacity satisfy both requirements
Spot-instance satisfy cost-effectiveness.
Since we're not talking about performance AND we already have instances reserved D is not solving "no downtime" while loses to C in cost-
effectiveness
upvoted 2 times
2 months ago
Selected Answer: D
Most important phrase "without any downtime"- D
upvoted 3 times
2 months ago
Selected Answer: D
"process messages without any downtime" -> D.
Spot Instances are cost-effective, but they may produce downtime.
upvoted 3 times
2 months ago
Selected Answer: D
D is the correct answer. Keyword is without downtime, so spot instance is out of the question. D also addresses cost-effectiveness, by being more
cost-effective than B
upvoted 3 times
2 months ago
D - as you can get a Spot Instance by providing a marginal price, and you may also reprocess the queue message when a Spot Instance is not available.
It would serve both: cost and avoiding downtime.
upvoted 1 times
Topic 1
Question #168
A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization
in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained.
What should a solutions architect do to accomplish this?
A. Create an ACL to provide access to the services or actions.
B. Create a security group to allow accounts and attach it to user groups.
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or actions.
Correct Answer:
D
Highly Voted
7 months, 2 weeks ago
D. Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the
maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access
control guidelines. See https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html.
upvoted 12 times
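A minimal deny-list SCP of the kind described above might look like this, sketched as a Python dict; the two denied actions are arbitrary examples:

```python
import json

# Example service control policy denying two arbitrary example actions
# org-wide. Attached to the root OU, it applies to every member account
# and acts as a guardrail regardless of the accounts' own IAM policies.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRestrictedServices",
        "Effect": "Deny",
        "Action": [
            "dynamodb:DeleteTable",    # example action to block
            "ec2:TerminateInstances",  # example action to block
        ],
        "Resource": "*",
    }],
}

print(json.dumps(scp, indent=2))
```

Because the SCP lives at the organization root, it is the single point where these permissions are maintained, which is exactly what the question asks for.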
Most Recent
2 days, 16 hours ago
By creating an SCP in the root organizational unit, the security team can define and enforce fine-grained permissions that limit access to specific
services or actions across all member accounts. The SCP acts as a guardrail, denying access to specified services or actions, ensuring that the
permissions are consistent and applied uniformly across the organization. SCPs are scalable and provide a single point of control for managing
permissions, allowing the security team to centrally manage access restrictions without needing to modify individual account settings.
Option A and option B are not suitable for controlling access across multiple accounts in AWS Organizations. ACLs and security groups are typically
used for managing network traffic and access within a single account or a specific resource.
Option C is not the recommended approach. Cross-account roles are used for granting access, and denying access through cross-account roles can
be complex and less manageable compared to using SCPs.
upvoted 1 times
1 month ago
Selected Answer: D
I vote for option D. Creating a service control policy (SCP) in the root organizational unit to deny access to the services or actions meets the
requirements.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
To limit access to specific services or actions in all of the team's AWS accounts and maintain a single point where permissions can be managed, the
solutions architect should create a service control policy (SCP) in the root organizational unit to deny access to the services or actions (Option D).
Service control policies (SCPs) are policies that you can use to set fine-grained permissions for your AWS accounts within your organization. SCPs
are attached to the root of the organizational unit (OU) or to individual accounts, and they specify the permissions that are allowed or denied for
the accounts within the scope of the policy. By creating an SCP in the root organizational unit, the security team can set permissions for all of the
accounts in the organization from a single location, ensuring that the permissions are consistently applied across all accounts.
upvoted 4 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 1 times
7 months, 1 week ago
D is correct
upvoted 1 times
7 months, 2 weeks ago
It's an organization and requires a single place to manage permissions.
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: D
SCP for organization
upvoted 2 times
Topic 1
Question #169
A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load
Balancer (ALB). A solutions architect must reduce the risk of DDoS attacks against the application.
What should the solutions architect do to meet this requirement?
A. Add an Amazon Inspector agent to the ALB.
B. Configure Amazon Macie to prevent attacks.
C. Enable AWS Shield Advanced to prevent attacks.
D. Configure Amazon GuardDuty to monitor the ALB.
Correct Answer:
C
2 days, 16 hours ago
Selected Answer: C
By enabling Shield Advanced, the web application benefits from automatic protection against common and sophisticated DDoS attacks. It utilizes
advanced detection and mitigation techniques, including ML algorithms and traffic analysis, to provide effective DDoS protection.
It also includes features like real-time monitoring, attack notifications, and detailed attack reports.
A is not related to DDoS protection. Amazon Inspector is a security assessment service that helps identify vulnerabilities and security issues in
applications and EC2.
B is also not the appropriate solution. Macie is a service that uses machine learning to discover, classify, and protect sensitive data stored in AWS. It
focuses on data security and protection, not specifically on DDoS prevention.
D is not the most effective solution. GuardDuty is a threat detection service that analyzes events and network traffic to identify potential security
threats and anomalies. While it can provide insights into potential DDoS attacks, it does not actively prevent or mitigate them.
upvoted 1 times
1 month, 2 weeks ago
What's going on, suddenly the questions are so easy
upvoted 3 times
6 months ago
Explained in details here https://medium.com/@tshemku/aws-waf-vs-firewall-manager-vs-shield-vs-shield-advanced-4c86911e94c6
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
To reduce the risk of DDoS attacks against the application, the solutions architect should enable AWS Shield Advanced (Option C).
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that helps protect web applications running on AWS from DDoS
attacks. AWS Shield Advanced is an additional layer of protection that provides enhanced DDoS protection capabilities, including proactive
monitoring and automatic inline mitigations, to help protect against even the largest and most sophisticated DDoS attacks. By enabling AWS Shield
Advanced, the solutions architect can help protect the application from DDoS attacks and reduce the risk of disruption to the application.
upvoted 4 times
6 months, 1 week ago
Selected Answer: C
C is the right answer.
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
AWS Shield Advanced
upvoted 3 times
7 months, 2 weeks ago
DDoS = AWS Shield
upvoted 4 times
Topic 1
Question #170
A company’s web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy,
which now requires the application to be accessed from one specific country only.
Which configuration will meet this requirement?
A. Configure the security group for the EC2 instances.
B. Configure the security group on the Application Load Balancer.
C. Configure AWS WAF on the Application Load Balancer in a VPC.
D. Configure the network ACL for the subnet that contains the EC2 instances.
Correct Answer:
C
Highly Voted
7 months, 1 week ago
Selected Answer: C
Geographic (Geo) Match Conditions in AWS WAF. This new condition type allows you to use AWS WAF to restrict application access based on the
geographic location of your viewers. With geo match conditions you can choose the countries from which AWS WAF should allow access.
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
upvoted 13 times
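A geo match rule of the kind described above can be sketched as a WAFv2 rule (shown as a Python dict; the country code "US" and the rule name are placeholders). In a real web ACL the default action would be Block, with this rule allowing only the matching country:

```python
import json

# Sketch of a WAFv2 rule that allows requests only from one country.
# "US" and the rule/metric names are placeholders; pair this rule with
# a web ACL whose DefaultAction is Block so everything else is denied.
geo_allow_rule = {
    "Name": "AllowSingleCountry",
    "Priority": 0,
    "Statement": {
        "GeoMatchStatement": {"CountryCodes": ["US"]}
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowSingleCountry",
    },
}

print(json.dumps(geo_allow_rule, indent=2))
```

Associating the web ACL containing this rule with the Application Load Balancer is what enforces the one-country policy.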
Most Recent
2 days, 16 hours ago
Selected Answer: C
By configuring AWS WAF on the ALB in a VPC, you can apply access control rules based on the geographic location of the incoming requests. AWS
WAF allows you to create rules that include conditions based on the IP addresses' country of origin. You can specify the desired country and deny
access to requests originating from any other country by leveraging AWS WAF's Geo Match feature.
Option A and option B focus on network-level access control and do not provide country-specific filtering capabilities.
Option D is not the ideal solution for restricting access based on country. Network ACLs primarily control traffic at the subnet level based on IP
addresses and port numbers, but they do not have built-in capabilities for country-based filtering.
upvoted 1 times
1 month ago
Configure AWS WAF for Geo Match Policy
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Source from an AWS link
Geographic (Geo) Match Conditions in AWS WAF. This condition type allows you to use AWS WAF to restrict application access based on the
geographic location of your viewers.
With geo match conditions you can choose the countries from which AWS WAF should allow access.
upvoted 2 times
6 months ago
Selected Answer: C
WAF Shield Advanced for DDOS,
GuardDuty is a continuous monitoring service that alerts you of potential threats, while Inspector is a one-time assessment service that provides a
report of vulnerabilities and deviations from best practices.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
To meet the requirement of allowing the web application to be accessed from one specific country only, the company should configure AWS WAF
(Web Application Firewall) on the Application Load Balancer in a VPC (Option C).
AWS WAF is a web application firewall service that helps protect web applications from common web exploits that could affect application
availability, compromise security, or consume excessive resources. AWS WAF allows you to create rules that block or allow traffic based on the
values of specific request parameters, such as IP address, HTTP header, or query string value. By configuring AWS WAF on the Application Load
Balancer and creating rules that allow traffic from a specific country, the company can ensure that the web application is only accessible from that
country.
upvoted 3 times
6 months, 1 week ago
Selected Answer: C
OptionC. Configure WAF for Geo Match Policy
upvoted 1 times
7 months, 1 week ago
C is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: C
C
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
upvoted 2 times
7 months, 2 weeks ago
C. WAF with ALB is the right option
upvoted 1 times
Topic 1
Question #171
A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger
number of inquiries during the holiday season only that cause slower response times. A solutions architect needs to design a solution that is
scalable and elastic.
What should the solutions architect do to accomplish this?
A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.
B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax
computations.
C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received
item names.
D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and
passes the item names to the EC2 instance for tax computations.
Correct Answer:
D
Highly Voted
5 months, 1 week ago
Selected Answer: B
Option D is similar to option B in that it uses Amazon API Gateway to handle the API requests, but it also includes an EC2 instance to perform the
tax computations. However, using an EC2 instance in this way is less scalable and less elastic than using AWS Lambda to perform the computations.
An EC2 instance is a fixed resource and requires manual scaling and management, while Lambda is an event-driven, serverless compute service that
automatically scales with the number of requests, making it more suitable for handling variable workloads and reducing response times during
high traffic periods. Additionally, Lambda is more cost-efficient than EC2 instances, as you only pay for the compute time consumed by your
functions, making it a more cost-effective solution.
upvoted 11 times
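As a sketch of the serverless pattern in option B: API Gateway forwards the item name in the request body, and a Lambda handler computes the tax. The price table and tax rate below are made-up illustrations, not part of the question:

```python
import json

PRICES = {"book": 12.50, "lamp": 40.00}   # hypothetical item prices
TAX_RATE = 0.08                            # hypothetical flat tax rate

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the request body as a string.
    item = json.loads(event["body"])["item"]
    price = PRICES.get(item)
    if price is None:
        return {"statusCode": 404,
                "body": json.dumps({"error": "unknown item"})}
    # Lambda scales out automatically per concurrent request, which is what
    # makes this elastic during a holiday-season surge.
    return {"statusCode": 200,
            "body": json.dumps({"item": item, "tax": round(price * TAX_RATE, 2)})}
```

Because each invocation is independent, there is no fixed fleet to size ahead of the seasonal peak, which is the scalability argument for B over D.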
Most Recent
2 days, 16 hours ago
Selected Answer: B
Option A (hosting an API on an Amazon EC2 instance) would require manual management and scaling of the EC2 instances, making it less scalable
and elastic compared to a serverless solution.
Option C (creating an Application Load Balancer with EC2 instances for tax computations) also involves manual management of the instances and
does not offer the same level of scalability and elasticity as a serverless solution.
Option D (designing a REST API using API Gateway and connecting it with an API hosted on an EC2 instance) adds unnecessary complexity and
management overhead. It is more efficient to directly integrate API Gateway with AWS Lambda for tax computations.
Therefore, designing a REST API using Amazon API Gateway and integrating it with AWS Lambda (option B) is the recommended approach to
achieve a scalable and elastic solution for the company's API during the holiday season.
upvoted 1 times
1 month ago
Selected Answer: B
Option B is the solution that is scalable and elastic, hence this meets requirements.
upvoted 1 times
2 months ago
Selected Answer: B
I also prefer B over D. However, it is quite vague since the question doesn't provide the processing time. The maximum processing time for AWS
Lambda is 15 minutes.
upvoted 1 times
5 months ago
B. Serverless option wins over EC2
upvoted 4 times
6 months ago
Lambda is serverless is scalable so answer should be B.
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
To design a scalable and elastic solution for providing an API for tax computations, the solutions architect should design a REST API using Amazon
API Gateway that connects with an API hosted on an Amazon EC2 instance (Option D).
API Gateway is a fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at any scale. By designing a REST
API using API Gateway, the solutions architect can create an API that is scalable, flexible, and easy to use. The API Gateway can accept and pass the
item names to the EC2 instance for tax computations, and the EC2 instance can perform the required computations when the API request is made.
upvoted 2 times
1 week, 4 days ago
You only explained the "front" part of scalability; unless you have an end-to-end scalable solution, it doesn't matter how scalable your front end is.
In D it ONLY covers the API front end, but the constraint is the EC2 instance, which is a single instance and not scalable. I think B is more suitable
given how little information is provided.
upvoted 1 times
6 months, 1 week ago
Option A (providing an API hosted on an EC2 instance) would not be a suitable solution as it may not be scalable or elastic enough to handle
the increased demand during the holiday season.
Option B (designing a REST API using API Gateway that passes item names to Lambda for tax computations) would not be a suitable solution as
it may not be suitable for computations that require a larger amount of resources or longer execution times.
Option C (creating an Application Load Balancer with two EC2 instances behind it) would not be a suitable solution as it may not provide the
necessary scalability and elasticity. Additionally, it would not provide the benefits of using API Gateway, such as API management and
monitoring capabilities.
upvoted 1 times
5 months, 3 weeks ago
But Option D is not scalable. The requirements state "A solutions architect needs to design a solution that is scalable and elastic". D fails to
meet these requirements. C on the other hand is scalable. There is nothing in the question to suggest that a longer execution than lambda
can handle happens. Therefore D is wrong, and C is possible.
upvoted 2 times
5 months, 3 weeks ago
Sorry, it should say "Therefore D is wrong, and B is possible."
upvoted 2 times
6 months, 1 week ago
B is the option
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B. Though D is also possible, B is more scalable as Lambda will autoscale to meet the dynamic load.
upvoted 4 times
6 months, 3 weeks ago
Selected Answer: B
B. Lambda scales much better
upvoted 2 times
7 months ago
B is the correct ans
upvoted 1 times
7 months ago
Selected Answer: B
B is correct, Lambda is a better choice
upvoted 1 times
7 months ago
B is the right answer
upvoted 2 times
7 months, 1 week ago
B is correct
upvoted 2 times
7 months, 1 week ago
Seems like B is the correct option
upvoted 4 times
7 months, 2 weeks ago
Selected Answer: B
Lambda
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/35849-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
Topic 1
Question #172
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is
sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire
application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
A. Configure a CloudFront signed URL.
B. Configure a CloudFront signed cookie.
C. Configure a CloudFront field-level encryption profile.
D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
Correct Answer:
A
Highly Voted
7 months, 1 week ago
CCCCCCCCC
Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive information
provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application stack. This encryption
ensures that only applications that need the data—and have the credentials to decrypt it—are able to do so.
upvoted 28 times
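To make the field-level encryption answer concrete, here is a sketch of the parameter dict that boto3's `cloudfront.create_field_level_encryption_profile` accepts. The profile name, caller reference, provider id, and field pattern are all illustrative, and nothing is sent to AWS:

```python
# Build the config for a CloudFront field-level encryption profile: it names
# the public key used to encrypt matching POST fields at the edge, so only
# applications holding the private key can decrypt them downstream.
def build_fle_profile(public_key_id, field_patterns):
    return {
        "FieldLevelEncryptionProfileConfig": {
            "Name": "sensitive-fields",        # illustrative profile name
            "CallerReference": "example-ref",  # must be unique per request
            "EncryptionEntities": {
                "Quantity": 1,
                "Items": [{
                    "PublicKeyId": public_key_id,     # key that encrypts at the edge
                    "ProviderId": "example-provider", # illustrative provider id
                    "FieldPatterns": {
                        "Quantity": len(field_patterns),
                        "Items": field_patterns,      # form fields to encrypt
                    },
                }],
            },
        }
    }

profile = build_fle_profile("K2EXAMPLE", ["credit-card-number"])
```

The profile is then referenced from a field-level encryption configuration attached to the distribution; the fields stay ciphertext through the ALB, application servers, and logs until a service with the private key decrypts them.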
Most Recent
2 days, 16 hours ago
Selected Answer: C
Option A and Option B are used for controlling access to specific resources or content based on signed URLs or cookies. While they provide
security and access control, they do not provide field-level encryption for sensitive data within the requests.
Option D ensures that communication between the viewer and CloudFront is encrypted with HTTPS. However, it does not specifically address the
protection and encryption of sensitive information within the application stack.
Therefore, the most appropriate action to protect sensitive information throughout the entire application stack and restrict access to certain
applications is to configure a CloudFront field-level encryption profile (Option C).
upvoted 1 times
1 month ago
Selected Answer: C
With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using HTTPS. Field-level encryption adds an
additional layer of security that lets you protect specific data throughout system processing so that only certain applications can see it.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html
"Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive information
provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application stack".
upvoted 2 times
4 months, 3 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html
"With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using
HTTPS. Field-level encryption adds an additional layer of security that lets you protect specific data
throughout system processing so that only certain applications can see it."
upvoted 3 times
5 months ago
C, field-level encryption should be used when necessary to protect sensitive data.
upvoted 1 times
5 months, 2 weeks ago
It should be C
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: C
C!
CloudFront’s field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys (which you supply) before a
POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed by certain components or services in
your application stack.
https://aws.amazon.com/about-aws/whats-new/2017/12/introducing-field-level-encryption-on-amazon-cloudfront/
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: C
Field-Level Encryption allows you to securely upload user-submitted sensitive information to your web servers. By contrast, a signed cookie provides
access to download multiple private files (from Tutorial Dojo).
upvoted 1 times
5 months, 3 weeks ago
C = Answer
I concur. Why? CloudFront's field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys (which you
supply) before a POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed by certain
components or services in your application stack.
upvoted 2 times
5 months, 3 weeks ago
Selected Answer: B
The correct answer is B. Configure a CloudFront signed cookie.
CloudFront signed cookies can be used to protect sensitive information by requiring users to authenticate with a signed cookie before they can
access content that is served through CloudFront. This can be used to restrict access to certain applications and ensure that the sensitive
information is protected throughout the entire application stack.
Option A, Configure a CloudFront signed URL, would also provide an additional layer of security by requiring users to authenticate with a signed
URL before they can access content served through CloudFront. However, this option would not protect the sensitive information throughout the
entire application stack.
upvoted 1 times
5 months, 3 weeks ago
Option C, Configure a CloudFront field-level encryption profile, can be used to protect sensitive information that is stored in Amazon S3 and
served through CloudFront. However, this option would not provide an additional layer of security for the entire application stack.
upvoted 1 times
5 months, 3 weeks ago
CloudFront signed cookies are used to control user access to sensitive documents, but that is not what is required. "Some of the information
submitted by users is sensitive": this is what you are looking to protect when it's in the system, not when users are trying to access it (and
that is not mentioned in the question).
Field-level encryption encrypts sensitive data so that it can only be decrypted and viewed by certain components or services. The question
states "access to the information should be restricted to certain applications," so C is a perfect match.
upvoted 1 times
6 months ago
Selected Answer: B
configuring a CloudFront signed cookie is a better solution for protecting sensitive information and restricting access to certain applications
throughout the entire application stack, This will allow them to restrict access to content based on the viewer’s identity and ensure that the
sensitive information is protected throughout the entire application stack
upvoted 1 times
6 months ago
Selected Answer: C
Option B, "Configure a CloudFront signed cookie," is not a suitable solution for this scenario because signed cookies are used to grant temporary
access to specific content in your CloudFront distribution. They do not provide an additional layer of security for the sensitive information
submitted by users, nor do they allow you to restrict access to certain applications.
upvoted 1 times
6 months ago
Selected Answer: B
Field-level encryption profiles, which you create in CloudFront, define the fields that you want to be encrypted.
upvoted 1 times
6 months ago
Use signed URLs in the following cases:
You want to restrict access to individual files, for example, an installation download for your application.
Your users are using a client (for example, a custom HTTP client) that doesn't support cookies.
Use signed cookies in the following cases:
You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area
of a website.
You don't want to change your current URLs.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
To protect sensitive information throughout the entire application stack and restrict access to certain applications, the solutions architect should
configure a CloudFront signed cookie (Option B).
CloudFront signed cookies are a feature of CloudFront that allows you to limit access to content in your distribution by requiring users to present a
valid cookie with a signed value. By creating a signed cookie and requiring users to present the cookie in order to access the content, you can
restrict access to the content to only those users who have a valid cookie. This can help protect sensitive information throughout the entire
application stack and ensure that only authorized applications have access to the information.
upvoted 3 times
6 months ago
Field-level encryption profiles, which you create in CloudFront, define the fields that you want to be encrypted.
upvoted 1 times
6 months, 1 week ago
Option A (configuring a CloudFront signed URL) would not be a suitable solution as signed URLs are temporary URLs that allow users to access
specific objects in an S3 bucket or a custom origin without requiring AWS credentials. While signed URLs can be useful for providing limited and
secure access to specific objects, they are not designed for protecting content throughout the entire application stack or for restricting access to
certain applications.
Option C (configuring a CloudFront field-level encryption profile) would not be a suitable solution as field-level encryption is a feature of
CloudFront that allows you to encrypt specific fields in an HTTP request or response, rather than the entire content. While field-level encryption
can be useful for protecting specific fields of sensitive information, it is not designed for protecting the entire content or for restricting access to
certain applications.
upvoted 1 times
5 months, 3 weeks ago
You are not told that the entire content requires protection, just some sensitive information.
And yes "Field-level encryption ensures ... sensitive data can only be decrypted and viewed by certain components or services" so does
achieve the requirements.
upvoted 1 times
6 months, 1 week ago
Option D (configuring CloudFront and setting the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy) would not be a
suitable solution as the Origin Protocol Policy setting determines whether CloudFront sends HTTP or HTTPS requests to the origin, rather
than protecting the content or restricting access to certain applications.
upvoted 1 times
6 months, 1 week ago
C is the option
upvoted 1 times
Topic 1
Question #173
A gaming company hosts a browser-based application on AWS. The users of the application consume a large number of videos and images that
are stored in Amazon S3. This content is the same for all users.
The application has increased in popularity, and millions of users worldwide are accessing these media files. The company wants to provide the files
to the users while reducing the load on the origin.
Which solution meets these requirements MOST cost-effectively?
A. Deploy an AWS Global Accelerator accelerator in front of the web servers.
B. Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
C. Deploy an Amazon ElastiCache for Redis instance in front of the web servers.
D. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
B. CloudFront is best for content delivery. Global Accelerator is best for non-HTTP (TCP/UDP) cases and supports HTTP cases as well, but with static
IP (elastic IP) or anycast IP address only.
upvoted 16 times
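For reference, here is a minimal sketch of the distribution config that option B describes, in the shape boto3's `cloudfront.create_distribution` accepts (this uses the legacy `ForwardedValues` cache settings for brevity; bucket and origin names are illustrative, and no API call is made):

```python
# Build a minimal CloudFront DistributionConfig with an S3 origin. Long edge
# TTLs are the mechanism that reduces load on the origin: repeat requests for
# the same media are served from the edge cache instead of from S3.
def build_distribution_config(bucket_domain):
    return {
        "CallerReference": "example-ref",     # must be unique per request
        "Comment": "media distribution",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-media-origin",
                "DomainName": bucket_domain,  # e.g. bucket.s3.amazonaws.com
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-media-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 86400,                  # cache media at the edge for a day
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
        },
    }

config = build_distribution_config("media-bucket.s3.amazonaws.com")
```

Because the content is identical for all users, nothing needs to be forwarded per-viewer, which is why query strings and cookies are not forwarded here: that maximizes the cache hit ratio.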
Most Recent
2 days, 16 hours ago
Selected Answer: B
Option A is not the most cost-effective solution for this scenario. While Global Accelerator can improve global application performance, it is
primarily used for accelerating TCP and UDP traffic, such as gaming and real-time applications, rather than serving static media files.
Options C and D are used for caching frequently accessed data in-memory to improve application performance. However, they are not specifically
designed for caching and serving media files like CloudFront, and therefore, may not provide the same cost-effectiveness and scalability for this
use case.
Hence, deploying an CloudFront web distribution in front of the S3 is the most cost-effective solution for delivering media files to millions of users
worldwide while reducing the load on the origin.
upvoted 1 times
2 months ago
Selected Answer: B
ElastiCache, enhances the performance of web applications by quickly retrieving information from fully-managed in-memory data stores. It utilizes
Memcached and Redis, and manages to considerably reduce the time your applications would, otherwise, take to read data from disk-based
databases.
Amazon CloudFront supports dynamic content from HTTP and WebSocket protocols, which are based on the Transmission Control Protocol (TCP)
protocol. Common use cases include dynamic API calls, web pages and web applications, as well as an application's static files such as audio and
images. It also supports on-demand media streaming over HTTP.
AWS Global Accelerator supports both User Datagram Protocol (UDP) and TCP-based protocols. It is commonly used for non-HTTP use cases, such
as gaming, IoT and voice over IP. It is also good for HTTP use cases that need static IP addresses or fast regional failover
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
The company wants to provide the files to the users while reducing the load on the origin.
CloudFront speeds up content delivery, but I'm not sure it reduces the load on the origin.
Some form of caching would cache content and deliver to users without going to the origin for each request.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
To provide media files to users while reducing the load on the origin and meeting the requirements cost-effectively, the gaming company should
deploy an Amazon CloudFront web distribution in front of the S3 bucket (Option B).
CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, such as images and videos, to users.
By using CloudFront, the media files will be served to users from the edge location that is closest to them, resulting in faster delivery and a better
user experience. CloudFront can also handle the high traffic and large number of requests expected from the millions of users, ensuring that the
media files are available and accessible to users around the world.
upvoted 3 times
6 months ago
Please don't post ChatGPT answers here; ChatGPT keeps changing its answers, and copy-pasting them is not the right way. Thanks.
upvoted 2 times
3 months, 4 weeks ago
why not? if the answers are correct and offer best possible explanation for the wrong options, I see no reason why it shouldn't be posted
here. Also, most of his answers were right, although reasons for the wrong options were sometimes lacking, but all in all, his responses were
very good.
upvoted 1 times
4 months, 4 weeks ago
Woaaaa! I always wondered where this kind of logic and explanation came from in this guy's answers. Nice catch TECHHB!
upvoted 2 times
4 months, 4 weeks ago
Answers are mostly correct. Only a small percentage were wrong
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Agreed
upvoted 1 times
7 months ago
Selected Answer: B
B is the correct answer
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
Topic 1
Question #174
A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone
behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the
application.
Which architecture should the solutions architect choose that provides high availability?
A. Create an Auto Scaling group that uses three instances across each of two Regions.
B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
B. Auto Scaling groups cannot span multiple Regions
upvoted 21 times
Most Recent
2 days, 15 hours ago
Selected Answer: B
Option A (creating an Auto Scaling group across two Regions) introduces additional complexity and potential replication challenges, which may not
be necessary for achieving high availability within a single Region.
Option C (creating an Auto Scaling template for another Region) suggests multi-region redundancy, which may not be the most straightforward
solution for achieving high availability without modifying the application.
Option D (changing the ALB to a round-robin configuration) does not provide the desired high availability. Round-robin configuration alone does
not ensure fault tolerance and does not leverage multiple Availability Zones for resilience.
Hence, modifying the Auto Scaling group to use three instances across each of two Availability Zones is the appropriate choice to provide high
availability for the multi-tier application.
upvoted 1 times
6 months ago
B. Auto Scaling groups cannot span multiple Regions
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B. Modify the Auto Scaling group to use three instances across each of the two Availability Zones.
This option would provide high availability by distributing the front-end web servers across multiple Availability Zones. If there is an issue with one
Availability Zone, the other Availability Zone would still be available to serve traffic. This would ensure that the application remains available and
highly available even if there is a failure in one of the Availability Zones.
upvoted 4 times
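To sketch what the change in option B amounts to: keep six web servers total but spread them across two AZs. Group, AZ, and subnet names below are illustrative; this just builds the kwargs for boto3's `autoscaling.update_auto_scaling_group` without calling AWS:

```python
# Build the update for an existing Auto Scaling group so its capacity is
# spread across two Availability Zones (three instances per AZ). Losing one
# AZ then leaves half the fleet serving traffic behind the ALB.
def multi_az_update(group_name, azs, subnet_ids):
    return {
        "AutoScalingGroupName": group_name,
        "MinSize": 6,
        "DesiredCapacity": 6,                       # six web servers, 3 per AZ
        "MaxSize": 12,
        "AvailabilityZones": azs,
        # One subnet per AZ; Auto Scaling balances instances across them.
        "VPCZoneIdentifier": ",".join(subnet_ids),
    }

params = multi_az_update(
    "web-asg",
    ["us-east-1a", "us-east-1b"],
    ["subnet-aaa", "subnet-bbb"],
)
```

Note this is purely an infrastructure change: the application itself is untouched, which is exactly what the question requires.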
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Agreed
upvoted 1 times
6 months, 2 weeks ago
B
Option B: This architecture provides high availability by having multiple Availability Zones hosting the same application. This allows for redundancy
in case one Availability Zone experiences downtime, as traffic can be served by the other Availability Zone. This solution also increases scalability
and performance by allowing traffic to be spread across two Availability Zones.
upvoted 1 times
7 months ago
Selected Answer: B
B is right
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
B. Auto Scaling in multiple AZs
upvoted 1 times
Topic 1
Question #175
An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application
stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some
customers experienced timeouts, and the application did not process the orders of those customers.
A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open
connections. The solutions architect needs to prevent the timeout errors while making the least possible changes to the application.
Which solution will meet these requirements?
A. Con gure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.
B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the
database endpoint.
C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route tra c to the read
replica.
D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda
function to use the DynamoDB table.
Correct Answer:
B
Highly Voted
7 months, 1 week ago
Selected Answer: B
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server
and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows
applications to pool and share connections established with the database, improving database efficiency and application scalability.
https://aws.amazon.com/id/rds/proxy/
upvoted 20 times
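The point worth emphasizing about option B is how small the application change is: only the connection endpoint moves from the Aurora cluster to the proxy. The endpoint names and DSN below are illustrative (nothing actually connects to a database):

```python
import os

def build_dsn():
    # The Lambda reads its database host from configuration; switching from
    # the Aurora cluster endpoint to the RDS Proxy endpoint is the entire
    # application change. RDS Proxy then pools and shares the underlying
    # connections, so a surge of Lambda invocations no longer exhausts the
    # database's memory with open connections.
    host = os.environ["DB_HOST"]
    return f"host={host} port=5432 dbname=orders user=app"

# Before: DB_HOST=orders.cluster-abc.us-east-1.rds.amazonaws.com  (direct)
# After:  DB_HOST=orders-proxy.proxy-abc.us-east-1.rds.amazonaws.com  (proxy)
os.environ["DB_HOST"] = "orders-proxy.proxy-abc.us-east-1.rds.amazonaws.com"
dsn = build_dsn()
```

Since the proxy speaks the same PostgreSQL wire protocol, queries, drivers, and credentials handling can stay as they are.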
Highly Voted
7 months, 2 weeks ago
Selected Answer: B
The issue relates to opening many connections, and the solution requires the fewest code changes, so B satisfies the conditions.
upvoted 6 times
Most Recent
2 days, 15 hours ago
Selected Answer: B
Option A (configuring provisioned concurrency and creating a global database) does not directly address the high connection utilization issue on
the database, and creating a global database may introduce additional complexity without immediate benefit to solving the timeout errors.
Option C (creating a read replica in a different AWS Region) introduces additional data replication and management complexity, which may not be
necessary to address the timeout errors.
Option D (migrating to Amazon DynamoDB) involves a significant change in the data storage technology and requires modifying the application to
use DynamoDB instead of Aurora PostgreSQL. This may not be the most suitable solution when the goal is to make minimal changes to the
application.
Therefore, using Amazon RDS Proxy and modifying the Lambda function to use the RDS Proxy endpoint is the recommended solution to prevent
timeout errors and reduce the impact on the database during peak loads.
upvoted 1 times
2 months, 3 weeks ago
Is there anyone who would love to share his/her contributor access? Please write me: frankobinnaeze@gmail.com. Thanks.
upvoted 1 times
5 months, 1 week ago
I also think the answer is B. However, can RDS Proxy be used with an Amazon Aurora PostgreSQL database?
upvoted 1 times
4 months ago
RDS Proxy can be used with Aurora
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: B
I expected an answer with a database replica, but there isn't one, so B is the most suitable.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the
database endpoint.
Using Amazon RDS Proxy can help reduce the number of connections to the database and improve the performance of the application. RDS Proxy
establishes a connection pool to the database and routes connections to the available connections in the pool. This can help reduce the number of
open connections to the database and improve the performance of the application. The Lambda function can be modified to use the RDS Proxy
endpoint instead of the database endpoint to take advantage of this improvement.
upvoted 1 times
6 months, 1 week ago
Option A is not a valid solution because configuring provisioned concurrency for the Lambda function does not address the issue of high CPU
utilization and memory utilization on the database.
Option C is not a valid solution because creating a read replica in a different Region does not address the issue of high CPU utilization and
memory utilization on the database.
Option D is not a valid solution because migrating the data from Aurora PostgreSQL to DynamoDB would require significant changes to the
application and may not be the best solution for this particular problem.
upvoted 2 times
6 months, 1 week ago
Option --- B
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
As it is mentioned that the issue was due to high CPU and memory caused by many open connections to the DB, B is the right answer.
upvoted 1 times
6 months, 2 weeks ago
B
Using Amazon RDS Proxy will allow the application to handle more connections and higher loads without timeouts, while making the least possible
changes to the application. The RDS Proxy will enable connection pooling, allowing multiple connections from the Lambda function to be served
from a single proxy connection. This will reduce the number of open connections on the database, which is causing high CPU and memory
utilization
upvoted 3 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
B - Proxy to manage connections
upvoted 2 times
7 months, 2 weeks ago
Correct B
upvoted 1 times
Topic 1
Question #176
An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table.
What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?
A. Use a VPC endpoint for DynamoDB.
B. Use a NAT gateway in a public subnet.
C. Use a NAT instance in a private subnet.
D. Use the internet gateway attached to the VPC.
Correct Answer:
D
Highly Voted
7 months, 2 weeks ago
Selected Answer: A
VPC endpoints for service in private subnets
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
upvoted 7 times
Most Recent
2 days, 14 hours ago
Option B (using a NAT gateway in a public subnet) and option C (using a NAT instance in a private subnet) are not the most secure options because
they involve routing traffic through a network address translation (NAT) device, which requires an internet gateway and traverses the public
internet.
Option D (using the internet gateway attached to the VPC) would require routing traffic through the internet gateway, which would result in the
traffic leaving the AWS network.
Therefore, the recommended and most secure approach is to use a VPC endpoint for DynamoDB to ensure private and secure access to the
DynamoDB table from your EC2 instances in private subnets, without the need to traverse the internet or leave the AWS network.
upvoted 1 times
1 week, 4 days ago
VPC endpoints for DynamoDB can alleviate these challenges. A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use
their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP addresses, and
you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to
DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
upvoted 1 times
1 month, 2 weeks ago
Answer is A.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: A
Option A: Use a VPC endpoint for DynamoDB - This is the correct option. A VPC endpoint for DynamoDB allows communication between resources
in your VPC and Amazon DynamoDB without traversing the internet or a NAT instance, which is more secure.
upvoted 2 times
3 months, 2 weeks ago
A
The most secure way to access an Amazon DynamoDB table from Amazon EC2 instances in private subnets while ensuring that the traffic does not
leave the AWS network is to use Amazon VPC Endpoints for DynamoDB.
Amazon VPC Endpoints enable private communication between Amazon EC2 instances in a VPC and Amazon services such as DynamoDB, without
the need for an internet gateway, NAT device, or VPN connection. When you create a VPC endpoint for DynamoDB, traffic from the EC2 instances
to the DynamoDB table remains within the AWS network and does not traverse the public internet.
upvoted 1 times
4 months, 2 weeks ago
private...backend Answer A
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: A
Community vote distribution
A (100%)
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use
their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2
instances do not require public IP addresses, and you don't need an internet gateway, a NAT device,
or a virtual private gateway in your VPC. You use endpoint policies to control access to DynamoDB.
Traffic between your VPC and the AWS service does not leave the Amazon network.
upvoted 2 times
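As a sketch of how such an endpoint is locked down in practice (the table ARN and action list below are hypothetical), a gateway endpoint for DynamoDB can carry an endpoint policy that restricts which tables and actions the private traffic may reach:

```python
import json

def dynamodb_endpoint_policy(table_arn):
    """Sketch of a VPC gateway endpoint policy limiting the endpoint to
    one DynamoDB table. Attached when creating the endpoint for
    com.amazonaws.<region>.dynamodb; traffic stays on the AWS network."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": [table_arn],
        }],
    }
    return json.dumps(policy, indent=2)

print(dynamodb_endpoint_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/MediaMetadata"))
```

Private-subnet route tables then route DynamoDB prefixes through the endpoint, so no internet gateway or NAT device is involved.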
5 months ago
ExamTopics.com should be sued for this answer tagged as Correct answer.
upvoted 4 times
6 months ago
Selected Answer: A
A is correct. VPC endpoint. D is exposed to the internet.
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
The most secure way to access the DynamoDB table while ensuring that the traffic does not leave the AWS network is Option A (Use a VPC
endpoint for DynamoDB.)
A VPC endpoint for DynamoDB allows you to privately connect your VPC to the DynamoDB service without requiring an Internet Gateway, VPN
connection, or AWS Direct Connect connection. This ensures that the traffic between the application and the DynamoDB table stays within the AWS
network and is not exposed to the public Internet.
upvoted 2 times
6 months, 1 week ago
Option B, using a NAT gateway in a public subnet, would allow the traffic to leave the AWS network and traverse the public Internet, which is
less secure.
Option C, using a NAT instance in a private subnet, would also allow the traffic to leave the AWS network but would require you to manage the
NAT instance yourself.
Option D, using the internet gateway attached to the VPC, would also expose the traffic to the public Internet.
upvoted 2 times
6 months, 1 week ago
A ---- is correct answer
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A.
upvoted 1 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 1 week ago
Sure A
upvoted 1 times
7 months, 1 week ago
Selected Answer: A
A - VPC endpoint
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: A
A - VPC endpoint
upvoted 3 times
Topic 1
Question #177
An entertainment company is using Amazon DynamoDB to store media metadata. The application is read intensive and experiencing delays. The
company does not have staff to handle additional operational overhead and needs to improve the performance efficiency of DynamoDB without
reconfiguring the application.
What should a solutions architect recommend to meet this requirement?
A. Use Amazon ElastiCache for Redis.
B. Use Amazon DynamoDB Accelerator (DAX).
C. Replicate data by using DynamoDB global tables.
D. Use Amazon ElastiCache for Memcached with Auto Discovery enabled.
Correct Answer:
B
Highly Voted
6 months ago
Selected Answer: B
DAX stands for DynamoDB Accelerator, and it's like a turbo boost for your DynamoDB tables. It's a fully managed, in-memory cache that speeds up
the read and write performance of your DynamoDB tables, so you can get your data faster than ever before.
upvoted 10 times
Most Recent
2 days, 14 hours ago
Selected Answer: B
A. Using Amazon ElastiCache for Redis would require modifying the application code and is not specifically designed to enhance DynamoDB
performance.
C. Replicating data with DynamoDB global tables would require additional configuration and operational overhead.
D. Using Amazon ElastiCache for Memcached with Auto Discovery enabled would also require application code modifications and is not specifically
designed for improving DynamoDB performance.
In contrast, option B, using Amazon DynamoDB Accelerator (DAX), is the recommended solution as it is purpose-built for enhancing DynamoDB
performance without the need for application reconfiguration. DAX provides a managed caching layer that significantly reduces read latency and
offloads traffic from DynamoDB tables.
upvoted 1 times
4 weeks ago
Selected Answer: B
improve the performance efficiency of DynamoDB
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that helps improve the read
performance of DynamoDB tables. DAX provides a caching layer between the application and DynamoDB, reducing the number of read requests
made directly to DynamoDB. This can significantly reduce read latencies and improve overall application performance.
upvoted 2 times
3 months ago
B-->Applications that are read-intensive===>https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html#DAX.use-cases
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
DynamoDB Accelerator, less overhead.
upvoted 2 times
5 months, 1 week ago
Option B is incorrect as the constraint in the question is not to recode the application. DAX requires the application to be reconfigured to point to
DAX instead of DynamoDB
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.client.modify-your-app.html
Answer should be A
upvoted 2 times
Community vote distribution
B (100%)
6 months, 1 week ago
Selected Answer: B
To improve the performance efficiency of DynamoDB without reconfiguring the application, a solutions architect should recommend using Amazon
DynamoDB Accelerator (DAX) which is Option B as the correct answer.
DAX is a fully managed, in-memory cache that can be used to improve the performance of read-intensive workloads on DynamoDB. DAX stores
frequently accessed data in memory, allowing the application to retrieve data from the cache rather than making a request to DynamoDB. This can
significantly reduce the number of read requests made to DynamoDB, improving the performance and reducing the latency of the application.
upvoted 3 times
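On the debate above about whether DAX needs application changes: the DAX data-plane API is DynamoDB-compatible, so the change is typically limited to how the client is constructed. A rough sketch (the cluster endpoint and config keys here are hypothetical):

```python
def table_client_config(use_dax, dax_endpoint=None):
    """Sketch: table reads/writes keep the same DynamoDB API calls;
    only the client construction differs between plain DynamoDB and DAX."""
    if use_dax:
        # e.g. with the amazon-dax-client package, roughly:
        #   from amazon_dax_client import AmazonDaxClient
        #   dax = AmazonDaxClient(endpoints=[dax_endpoint])
        return {"client": "dax", "endpoints": [dax_endpoint]}
    # plain boto3 would be: boto3.resource("dynamodb")
    return {"client": "dynamodb"}

cfg = table_client_config(
    True, "my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111")
print(cfg["client"])
```

Whether that counts as "reconfiguring the application" is exactly the judgment call the dissenting comment raises; the exam's intent is that DAX is the purpose-built, managed option for read-heavy DynamoDB workloads.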
6 months, 1 week ago
Option A, using Amazon ElastiCache for Redis, would not be a good fit because it is not specifically designed for use with DynamoDB and would
require reconfiguring the application to use it.
Option C, replicating data using DynamoDB global tables, would not directly improve the performance of reading requests and would require
additional operational overhead to maintain the replication.
Option D, using Amazon ElastiCache for Memcached with Auto Discovery enabled, would also not be a good fit because it is not specifically
designed for use with DynamoDB and would require reconfiguring the application to use it.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: B
Agreed
upvoted 2 times
6 months, 2 weeks ago
B
DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers lightning-fast performance and consistent low-latency
responses. It provides fast performance without requiring any application reconfiguration
upvoted 3 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
DAX is the cache for this
upvoted 1 times
7 months, 2 weeks ago
B is correct, DAX provides caching + no changes
upvoted 2 times
Topic 1
Question #178
A company’s infrastructure consists of Amazon EC2 instances and an Amazon RDS DB instance in a single AWS Region. The company wants to
back up its data in a separate Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Backup to copy EC2 backups and RDS backups to the separate Region.
B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region.
C. Create Amazon Machine Images (AMIs) of the EC2 instances. Copy the AMIs to the separate Region. Create a read replica for the RDS DB
instance in the separate Region.
D. Create Amazon Elastic Block Store (Amazon EBS) snapshots. Copy the EBS snapshots to the separate Region. Create RDS snapshots.
Export the RDS snapshots to Amazon S3. Configure S3 Cross-Region Replication (CRR) to the separate Region.
Correct Answer:
A
2 days, 14 hours ago
Selected Answer: A
Using AWS Backup to copy EC2 and RDS backups to the separate Region is the solution that meets the requirements with the least operational
overhead. AWS Backup simplifies the backup process and automates the copying of backups to another Region, reducing the manual effort and
operational complexity involved in managing separate backup processes for EC2 instances and RDS databases.
Option B is incorrect because Amazon Data Lifecycle Manager (Amazon DLM) is not designed for directly copying RDS backups to a separate
region.
Option C is incorrect because creating Amazon Machine Images (AMIs) and read replicas adds complexity and operational overhead compared to a
dedicated backup solution.
Option D is incorrect because using Amazon EBS snapshots, RDS snapshots, and S3 Cross-Region Replication (CRR) involves multiple manual steps
and additional configuration, increasing complexity.
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: A
A is correct
upvoted 2 times
2 months ago
Selected Answer: A
Option B, using Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region, would require
more operational overhead because DLM is primarily designed for managing the lifecycle of Amazon EBS snapshots, and would require additional
configuration to manage RDS backups.
Option C, creating AMIs of the EC2 instances and read replicas of the RDS DB instance in the separate Region, would require more manual effort to
manage the backup and disaster recovery process, as it requires manual creation and management of AMIs and read replicas.
upvoted 2 times
2 months ago
Option D, creating EBS snapshots and RDS snapshots, exporting them to Amazon S3, and configuring S3 Cross-Region Replication (CRR) to the
separate Region, would require more configuration and management effort. Additionally, S3 CRR can have additional charges for data transfer
and storage in the destination region.
Therefore, option A is the best choice for meeting the company's requirements with the least operational overhead.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
Option A, using AWS Backup to copy EC2 backups and RDS backups to the separate region, is the correct answer for the given scenario.
Using AWS Backup is a simple and efficient way to backup EC2 instances and RDS databases to a separate region. It requires minimal operational
overhead and can be easily managed through the AWS Backup console or API. AWS Backup can also provide automated scheduling and retention
management for backups, which can help ensure that backups are always available and up to date.
upvoted 2 times
Community vote distribution
A (95%)
5%
5 months, 3 weeks ago
Selected Answer: A
Cross-Region backup
Using AWS Backup, you can copy backups to multiple different AWS Regions on demand or automatically as part of a scheduled backup plan.
Cross-Region backup is particularly valuable if you have business continuity or compliance requirements to store backups a minimum distance
away from your production data.
https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
upvoted 4 times
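The cross-Region copy described above is expressed as a copy action on a backup rule. A minimal sketch of a CreateBackupPlan input (the plan name, schedule, and ARNs are hypothetical):

```python
import json

def backup_plan_with_cross_region_copy(dest_vault_arn):
    """Sketch of an AWS Backup plan whose daily rule copies each recovery
    point to a vault in a separate Region via CopyActions."""
    return {
        "BackupPlan": {
            "BackupPlanName": "daily-with-dr-copy",
            "Rules": [{
                "RuleName": "daily",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "CopyActions": [{
                    "DestinationBackupVaultArn": dest_vault_arn,
                }],
            }],
        }
    }

plan = backup_plan_with_cross_region_copy(
    "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault")
print(json.dumps(plan, indent=2))
```

One plan covers both the EC2 instances and the RDS DB instance (via resource assignments), which is why this is the least-overhead option.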
5 months, 4 weeks ago
A is correct - you need a backup solution for both EC2 and RDS. DLM doesn't work with RDS, only with EBS snapshots.
upvoted 1 times
6 months ago
Selected Answer: A
using Amazon DLM to copy EC2 backups and RDS backups to the separate region, is not a valid solution because Amazon DLM does not support
backing up data across regions.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region.
Amazon DLM is a fully managed service that helps automate the creation and retention of Amazon EBS snapshots and RDS DB snapshots. It can be
used to create and manage backup policies that specify when and how often snapshots should be created, as well as how long they should be
retained. With Amazon DLM, you can easily and automatically create and manage backups of your EC2 instances and RDS DB instances in a
separate Region, with minimal operational overhead.
upvoted 1 times
1 month ago
AWS DLM does not support RDS backups; it only works with EBS storage. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
upvoted 1 times
5 months, 3 weeks ago
Buruguduystunstugudunstuy, sorry, but I haven’t found any info about copying RDS backups by DLM. The DLM works only with EBS.
So the only answer is A - AWS Backup
upvoted 1 times
6 months, 1 week ago
Option A, using AWS Backup to copy EC2 backups and RDS backups to the separate Region, would also work, but it may require more manual
configuration and management.
Option C, creating AMIs of the EC2 instances and copying them to the separate Region, and creating a read replica for the RDS DB instance in
the separate Region, would work, but it may require more manual effort to set up and maintain.
Option D, creating EBS snapshots and copying them to the separate Region, creating RDS snapshots, and exporting them to Amazon S3, and
configuring S3 CRR to the separate Region, would also work, but it would involve multiple steps and may require more manual effort to set up
and maintain. Overall, using Amazon DLM is likely to be the easiest and most efficient option for meeting the requirements with the least
operational overhead.
upvoted 1 times
5 months, 2 weeks ago
This guy is giving wrong answers in detail...lol
upvoted 4 times
6 months ago
Some of your answers are very detailed. Can you back them up with a reference?
upvoted 1 times
5 months ago
All of their answers are from ChatGPT
upvoted 5 times
6 months ago
using Amazon DLM to copy EC2 backups and RDS backups to the separate region, is not a valid solution because Amazon DLM does not
support backing up data across regions.
upvoted 4 times
5 months, 1 week ago
I chose A, but DLM does support cross-Region copies; it just doesn't support RDS. Cross-Region copy rules are a feature of DLM ("For each schedule,
you can define the frequency, fast snapshot restore settings (snapshot lifecycle policies only), cross-Region copy rules, and tags").
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
upvoted 1 times
6 months ago
Thanks techhb
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A as it is fully managed service with least operational overhead
upvoted 1 times
6 months, 2 weeks ago
A
AWS Backup is a fully managed service that handles the process of copying backups to a separate Region automatically
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
Ans A with least operational overhead
upvoted 1 times
7 months, 2 weeks ago
AWS Backup supports cross-Region backups
upvoted 3 times
7 months, 2 weeks ago
Selected Answer: A
Option A
AWS Backup supports EC2 and RDS
upvoted 3 times
7 months, 2 weeks ago
AWS Backup supports cross-Region backups
upvoted 1 times
Topic 1
Question #179
A solutions architect needs to securely store a database user name and password that an application uses to access an Amazon RDS DB
instance. The application that accesses the database runs on an Amazon EC2 instance. The solutions architect wants to create a secure
parameter in AWS Systems Manager Parameter Store.
What should the solutions architect do to meet this requirement?
A. Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS
KMS) key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance.
B. Create an IAM policy that allows read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service
(AWS KMS) key that is used to encrypt the parameter. Assign this IAM policy to the EC2 instance.
C. Create an IAM trust relationship between the Parameter Store parameter and the EC2 instance. Specify Amazon RDS as a principal in the
trust policy.
D. Create an IAM trust relationship between the DB instance and the EC2 instance. Specify Systems Manager as a principal in the trust policy.
Correct Answer:
A
Highly Voted
6 months, 1 week ago
Selected Answer: A
CORRECT Option A
To securely store a database user name and password in AWS Systems Manager Parameter Store and allow an application running on an EC2
instance to access it, the solutions architect should create an IAM role that has read access to the Parameter Store parameter and allow Decrypt
access to an AWS KMS key that is used to encrypt the parameter. The solutions architect should then assign this IAM role to the EC2 instance.
This approach allows the EC2 instance to access the parameter in the Parameter Store and decrypt it using the specified KMS key while enforcing
the necessary security controls to ensure that the parameter is only accessible to authorized parties.
upvoted 6 times
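The permissions described above can be sketched as an IAM policy attached to the instance role (the parameter and key ARNs below are hypothetical): read access to the SecureString parameter plus Decrypt on the KMS key that encrypts it.

```python
import json

def parameter_read_policy(parameter_arn, kms_key_arn):
    """Sketch of the policy the EC2 instance role needs to read a
    SecureString parameter from Parameter Store and decrypt it."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ssm:GetParameter", "Resource": parameter_arn},
            {"Effect": "Allow", "Action": "kms:Decrypt", "Resource": kms_key_arn},
        ],
    }
    return json.dumps(policy, indent=2)

print(parameter_read_policy(
    "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db/credentials",
    "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"))
```

This also shows why option A beats option B: the policy itself cannot be attached to an EC2 instance; it has to live on a role that the instance assumes via an instance profile.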
6 months, 1 week ago
Option B, would not be sufficient, as IAM policies cannot be directly attached to EC2 instances.
Option C, would not be a valid solution, as the Parameter Store parameter and the EC2 instance are not entities that can be related through an
IAM trust relationship.
Option D, would not be a valid solution, as the trust policy would not allow the EC2 instance to access the parameter in the Parameter Store or
decrypt it using the specified KMS key.
upvoted 4 times
Highly Voted
7 months, 2 weeks ago
Selected Answer: A
Agree with A, IAM role is for services (EC2 for example)
IAM policy is more for users and groups
upvoted 5 times
Most Recent
2 days, 14 hours ago
Selected Answer: A
By creating an IAM role with read access to the Parameter Store parameter and Decrypt access to the associated AWS KMS key, the EC2 will have
the necessary permissions to securely retrieve and decrypt the database user name and password from the Parameter Store. This approach ensures
that the sensitive information is protected and can be accessed only by authorized entities.
Answers B, C, and D are not correct because they do not provide a secure way to store and retrieve the database user name and password from the
Parameter Store. IAM policies, trust relationships, and associations with the DB instance are not the appropriate mechanisms for securely managing
sensitive credentials in this scenario. Answer A is the correct choice as it involves creating an IAM role with the necessary permissions and assigning
it to the EC2 instance to access the Parameter Store securely.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
Community vote distribution
A (90%)
10%
2 months ago
Selected Answer: A
By creating an IAM role and assigning it to the EC2 instance, the application running on the EC2 instance can access the Parameter Store parameter
securely without the need for hard-coding the database user name and password in the application code.
The IAM role should have read access to the Parameter Store parameter and Decrypt access to an AWS KMS key that is used to encrypt the
parameter to ensure that the parameter is protected at rest.
upvoted 1 times
5 months, 3 weeks ago
There should be the Decrypt access to KMS.
"If you choose the SecureString parameter type when you create your parameter, Systems Manager uses AWS KMS to encrypt the parameter
value."
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
IAM role - for EC2
upvoted 1 times
6 months, 1 week ago
A -- is correct option
upvoted 1 times
6 months, 1 week ago
Option A.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
6 months, 2 weeks ago
Answer A
Create an IAM role that has read access to the Parameter Store parameter. Allow Decrypt access to an AWS Key Management Service (AWS KMS)
key that is used to encrypt the parameter. Assign this IAM role to the EC2 instance. This solution will allow the application to securely access the
database user name and password stored in the parameter store.
upvoted 1 times
7 months ago
Selected Answer: B
I think it's the policy.
upvoted 1 times
7 months ago
Access to Parameter Store is enabled by IAM policies and supports resource level permissions for access. An IAM policy that grants permissions
to specific parameters or a namespace can be used to limit access to these parameters. CloudTrail logs, if enabled for the service, record any
attempt to access a parameter.
upvoted 1 times
7 months ago
https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/
upvoted 1 times
5 months, 3 weeks ago
This link gives the example "Walkthrough: Securely access Parameter Store resources with IAM roles for tasks" - essentially A above. It does
not show how this can be done using a policy (B) alone.
upvoted 1 times
6 months, 3 weeks ago
Can you attach a policy to EC2 directly?
upvoted 1 times
7 months, 1 week ago
Selected Answer: A
A. Attach IAM role to EC2 Instance
https://aws.amazon.com/blogs/security/digital-signing-asymmetric-keys-aws-kms/
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
Attach IAM role to EC2 Instance profile
upvoted 3 times
7 months, 2 weeks ago
Selected Answer: B
IAM policy
upvoted 1 times
Topic 1
Question #180
A company is designing a cloud communications platform that is driven by APIs. The application is hosted on Amazon EC2 instances behind a
Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the application through APIs. The
company wants to protect the platform against web exploits like SQL injection and also wants to detect and mitigate large, sophisticated DDoS
attacks.
Which combination of solutions provides the MOST protection? (Choose two.)
A. Use AWS WAF to protect the NLB.
B. Use AWS Shield Advanced with the NLB.
C. Use AWS WAF to protect Amazon API Gateway.
D. Use Amazon GuardDuty with AWS Shield Standard
E. Use AWS Shield Standard with Amazon API Gateway.
Correct Answer:
BC
Highly Voted
7 months, 2 weeks ago
Selected Answer: BC
Shield - Load Balancer, CF, Route53
WAF - CF, ALB, API Gateway
upvoted 31 times
1 month ago
Shield - Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.
WAF - Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync
upvoted 2 times
6 months ago
Thank you! You meant WAF, and CF = CloudFront, not CloudFormation, right? haha
upvoted 4 times
Highly Voted
7 months, 2 weeks ago
Selected Answer: BC
AWS Shield Advanced - DDos attacks
AWS WAF to protect Amazon API Gateway, because WAF sits before the API Gateway and then comes NLB.
upvoted 6 times
1 month, 2 weeks ago
I don't agree that the NLB sits before the API Gateway; it should be the other way around.
upvoted 1 times
Most Recent
2 days, 14 hours ago
B. AWS Shield Advanced provides advanced DDoS protection for the NLB, making it the appropriate choice for protecting against large and
sophisticated DDoS attacks at the network layer.
C. AWS WAF is designed to provide protection at the application layer, making it suitable for securing the API Gateway against web exploits like
SQL injection.
A. AWS WAF is not compatible with NLB as it operates at the application layer, whereas NLB operates at the transport layer.
D. While GuardDuty helps detect threats, it does not directly protect against web exploits or DDoS attacks. Shield Standard focuses on edge
resources, not specifically NLBs.
E. Shield Standard provides basic DDoS protection for edge resources, but it does not directly protect the NLB or address web exploits at the
application layer.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: BC
B and C is correct
upvoted 1 times
Community vote distribution
BC (92%)
4%
2 months ago
Selected Answer: BC
NLB is a Layer 3/4 component while WAF is a Layer 7 protection component.
That is why WAF is only available for Application Load Balancer in the ELB portfolio. NLB does not terminate the TLS session, so WAF is not
capable of acting on the content. I would consider using AWS Shield at Layer 3/4.
https://repost.aws/questions/QU2fYXwSWUS0q9vZiWDoaEzA/nlb-need-to-attach-aws-waf
upvoted 3 times
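The Layer 7 vs Layer 3/4 point above can be captured in a toy helper (the type labels below are made up for illustration, not real AWS identifiers): WAF web ACLs associate with API Gateway, ALB, AppSync, and Cognito user pools, but not with an NLB.

```python
# Made-up labels for the regional resource types that AWS WAF web ACLs
# can be associated with, per the WAF developer guide. An NLB is absent
# because WAF operates at Layer 7 and an NLB works at Layer 3/4.
WAF_ASSOCIABLE = {"apigateway", "elasticloadbalancing/app", "appsync", "cognito-idp"}

def waf_can_protect(resource_type):
    """Toy check for the point made above: WAF attaches to API Gateway
    or an ALB, not to a Network Load Balancer."""
    return resource_type in WAF_ASSOCIABLE

print(waf_can_protect("apigateway"))                # API Gateway: associable
print(waf_can_protect("elasticloadbalancing/net"))  # NLB: not associable
```

That is why the exam pairs WAF with API Gateway (option C) and leaves DDoS protection of the NLB to Shield Advanced (option B).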
2 months, 2 weeks ago
Selected Answer: C
• A. Use AWS WAF to protect the NLB.
INCORRECT, because WAF does not integrate with a Network Load Balancer.
• B. Use AWS Shield Advanced with the NLB.
YES. AWS Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running in AWS.
The doubt is: why apply the protection on the NLB when the app is fronted by the API Gateway? Shield should be in front of the
communications, not behind them.
Nevertheless, this is the best option.
• C. Use AWS WAF to protect Amazon API Gateway.
YES, https://aws.amazon.com/es/waf/faqs/
• D. Use Amazon GuardDuty with AWS Shield Standard
INCORRECT, GuardDuty does not prevent attacks.
• E. Use AWS Shield Standard with Amazon API Gateway.
INCORRECT. It could, in principle, be a good option, since it is in front of the gateway, but the question explicitly says it
"wants to detect and mitigate large, sophisticated DDoS attacks",
and Shield Standard does not provide this feature.
upvoted 1 times
5 months ago
For those who selected A: it is wrong. WAF is Layer 7; it only supports ALB, API Gateway, CloudFront, Cognito user pools, and AppSync GraphQL APIs
(https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html). NLB is NOT supported. Answer is BC.
upvoted 4 times
5 months, 1 week ago
Selected Answer: AB
A and B are the best options to provide the greatest protection for the platform against web vulnerabilities and large, sophisticated DDoS attacks.
Option A: Use AWS WAF to protect the NLB. This will provide protection against common web vulnerabilities such as SQL injection.
Option B: Use AWS Shield Advanced with the NLB. This will provide additional protection against large and sophisticated DDoS attacks.
upvoted 2 times
1 month ago
correct
upvoted 1 times
5 months, 1 week ago
The best protection for the platform would be to use A and C together because it will protect both the NLB and the API Gateway from web
vulnerabilities and DDoS attacks.
upvoted 1 times
5 months, 1 week ago
A and C are the best options for protecting the platform against web vulnerabilities and detecting and mitigating large and sophisticated DDoS
attacks.
A: AWS WAF can be used to protect the NLB from web vulnerabilities such as SQL injection.
C: AWS WAF can be used to protect Amazon API Gateway and also provide protection against DDoS attacks.
B: AWS Shield Advanced is used to protect resources from DDoS attacks, but it is not specific to the NLB and may not provide the same level of
protection as using WAF specifically on the NLB.
D and E: Amazon GuardDuty and AWS Shield Standard are primarily used for threat detection and may not provide the same level of protection
as using WAF and Shield Advanced.
upvoted 1 times
6 months ago
Selected Answer: BC
AWS Shield Advanced can help protect your Amazon EC2 instances and Network Load Balancers against infrastructure-layer Distributed Denial of
Service (DDoS) attacks. Enable AWS Shield Advanced on an AWS Elastic IP address and attach the address to an internet-facing EC2 instance or
Network Load Balancer. https://aws.amazon.com/blogs/security/tag/network-load-balancers/
upvoted 2 times
6 months, 1 week ago
Regional resources
You can protect regional resources in all Regions where AWS WAF is available. You can see the list at AWS WAF endpoints and quotas in the
Amazon Web Services General Reference.
You can use AWS WAF to protect the following regional resource types:
Amazon API Gateway REST API
Application Load Balancer
AWS AppSync GraphQL API
Amazon Cognito user pool
You can only associate a web ACL to an Application Load Balancer that's within AWS Regions. For example, you cannot associate a web ACL to an
Application Load Balancer that's on AWS Outposts.
upvoted 1 times
6 months, 1 week ago
Ans:-a and C
upvoted 1 times
6 months, 1 week ago
Selected Answer: AC
***CORRECT***
A. Use AWS WAF to protect the NLB.
C. Use AWS WAF to protect Amazon API Gateway.
AWS WAF is a web application firewall that helps protect web applications from common web exploits such as SQL injection and cross-site
scripting attacks. By using AWS WAF to protect the NLB and Amazon API Gateway, the company can provide an additional layer of protection for
its cloud communications platform against these types of web exploits.
upvoted 1 times
6 months ago
Your answer is wrong.
Sophisticated DDoS = Shield Advanced (DDoS attacks hit the front!). What happens if your load balancer goes down?
Your API Gateway is on the BACK, further behind the NLB. Protect that against SQL injection with the WAF.
B and C are right.
upvoted 3 times
5 months ago
This guy just copies and pastes from ChatGPT.
upvoted 4 times
6 months, 1 week ago
About AWS Shield Advanced and Amazon GuardDuty
AWS Shield Advanced is a managed DDoS protection service that provides additional protection for Amazon EC2 instances, Amazon RDS DB
instances, Amazon Elastic Load Balancers, and Amazon CloudFront distributions. It can help detect and mitigate large, sophisticated DDoS
attacks, "but it does not provide protection against web exploits like SQL injection."
Amazon GuardDuty is a threat detection service that uses machine learning and other techniques to identify potentially malicious activity in
your AWS accounts. It can be used in conjunction with AWS Shield Standard, which provides basic DDoS protection for Amazon EC2 instances,
Amazon RDS DB instances, and Amazon Elastic Load Balancers. However, neither Amazon GuardDuty nor AWS Shield Standard provides
protection against web exploits like SQL injection.
Overall, the combination of using AWS WAF to protect the NLB and Amazon API Gateway provides the most protection against web exploits
and large, sophisticated DDoS attacks.
upvoted 1 times
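To make the WAF half of this debate concrete, here is a hedged sketch in Python of the request body a wafv2-style create_web_acl call might take for a regional web ACL that blocks SQL injection. The ACL name, scope choice, and metric names are illustrative, not from the question.

```python
# Sketch of a WAF v2 web ACL with a SQL-injection rule, shaped like the
# parameters for wafv2's create_web_acl API. All names are illustrative.
import json

def build_sqli_web_acl(name="platform-acl"):
    """Return a request body for a regional web ACL that blocks SQLi."""
    return {
        "Name": name,
        "Scope": "REGIONAL",  # REGIONAL covers API Gateway and ALBs
        "DefaultAction": {"Allow": {}},
        "Rules": [
            {
                "Name": "block-sqli",
                "Priority": 0,
                "Action": {"Block": {}},
                "Statement": {
                    "SqliMatchStatement": {
                        "FieldToMatch": {"Body": {}},
                        "TextTransformations": [
                            {"Priority": 0, "Type": "URL_DECODE"}
                        ],
                    }
                },
                "VisibilityConfig": {
                    "SampledRequestsEnabled": True,
                    "CloudWatchMetricsEnabled": True,
                    "MetricName": "block-sqli",
                },
            }
        ],
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

acl = build_sqli_web_acl()
print(json.dumps(acl["Rules"][0]["Statement"], indent=2))
```

Note the REGIONAL scope: per the docs quoted in this thread, a web ACL associates with API Gateway, ALBs, AppSync, and Cognito, which is part of why the NLB option is debated.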
6 months, 1 week ago
Option B and C
upvoted 1 times
6 months, 1 week ago
Selected Answer: BC
B and C
upvoted 1 times
6 months, 2 weeks ago
B & C is the answer
upvoted 1 times
7 months, 1 week ago
B and C
upvoted 1 times
7 months, 1 week ago
B and C
"AWS Shield Advanced" for "sophisticated DDoS attacks"
"AWS WAF" for "NLB
upvoted 4 times
7 months, 2 weeks ago
B and C
upvoted 1 times
Topic 1
Question #181
A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results
does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased
demand is to increase the size of the instances.
The company’s developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service
(Amazon ECS).
What should a solutions architect recommend for communication between the microservices?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to
the data consumers to process data from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic.
Add code to the data consumers to subscribe to the topic.
C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add
code to the data consumers to receive a data object that is passed from the Lambda function.
D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to
the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
Correct Answer:
A
Highly Voted
6 months, 1 week ago
Selected Answer: A
Option B, using Amazon Simple Notification Service (SNS), would not be suitable for this use case, as SNS is a pub/sub messaging service that is
designed for one-to-many communication, rather than point-to-point communication between specific microservices.
Option C, using an AWS Lambda function to pass messages, would not be suitable for this use case, as it would require the data producers and
data consumers to have a direct connection and invoke the Lambda function, rather than being decoupled through a message queue.
Option D, using an Amazon DynamoDB table with DynamoDB Streams, would not be suitable for this use case, as it would require the data
consumers to continuously poll the DynamoDB Streams API to detect new table entries, rather than being notified of new data through a message
queue.
upvoted 10 times
6 months, 1 week ago
Hence, Option A is the correct answer.
Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the
data consumers to process data from the queue.
upvoted 2 times
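The decoupling that makes A correct can be shown in miniature. The sketch below uses Python's stdlib queue purely to illustrate the producer/consumer pattern; a real implementation would use boto3's sqs send_message / receive_message / delete_message against an SQS queue instead.

```python
# The SQS pattern in miniature: producers enqueue work, consumers pull it
# at their own pace, and the order of results does not matter.
import queue
import threading

work = queue.Queue()
results = []

def producer(items):
    for item in items:
        work.put(item)  # analogous to sqs.send_message

def consumer():
    while True:
        try:
            item = work.get(timeout=0.2)  # analogous to receive_message
        except queue.Empty:
            return  # no more work; a real consumer would keep polling
        results.append(item * 2)  # "process" the data
        work.task_done()          # analogous to delete_message

producer(range(5))
workers = [threading.Thread(target=consumer) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Completion order may vary across runs; the set of results does not.
print(sorted(results))
```

Producers and consumers never call each other directly, which is exactly the property the monolith lacked.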
Most Recent
2 days, 14 hours ago
Selected Answer: A
A. Creating an Amazon SQS queue allows for asynchronous communication between microservices, decoupling the data producers and consumers.
It provides scalability, flexibility, and ensures that data processing can happen independently and at a desired pace.
B. Amazon SNS is more suitable for pub/sub messaging, where multiple subscribers receive the same message. It may not be the best fit for
sequential data processing.
C. Using AWS Lambda functions for communication introduces unnecessary complexity and may not be the optimal solution for sequential data
processing.
D. Amazon DynamoDB with DynamoDB Streams is primarily designed for real-time data streaming and change capture scenarios. It may not be the
most efficient choice for sequential data processing in a microservices architecture.
upvoted 1 times
1 month ago
BBBBBBBBB
upvoted 1 times
1 month ago
Selected Answer: A
SQS for decoupling a monolithic architecture, hence option A is the right answer.
upvoted 1 times
2 months, 3 weeks ago
it also says 'the order of results does not matter'. Option B is correct.
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
The answer is A.
B is wrong because SNS cannot send events "directly" to ECS.
https://docs.aws.amazon.com/sns/latest/dg/sns-event-destinations.html
upvoted 1 times
4 months ago
Selected Answer: B
it doesn't say it is a one-to-one relationship, SNS is better
upvoted 3 times
1 week, 4 days ago
watch out for this sentence in the question..."Data needs to process sequentially...."
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Best answer is A.
Though C or D are possible, they require additional components and integration, so they are not efficient. Assuming the rate of incoming requests
is within limits that SQS can handle, A is the best option.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
6 months, 2 weeks ago
answer is B.
An Amazon Simple Notification Service (Amazon SNS) topic can be used for communication between the microservices in this scenario. The data
producers can be configured to publish notifications to the topic, and the data consumers can be configured to subscribe to the topic and receive
notifications as they are published. This allows for asynchronous communication between the microservices. The question here focuses on
communication between microservices.
upvoted 2 times
7 months, 1 week ago
We need decoupling so ok to use SQS
upvoted 2 times
7 months, 1 week ago
Can someone explain it bit more? Not able to understand it.
upvoted 2 times
7 months, 1 week ago
As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into the microservices architectural
style by means of decoupling. Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to
decouple and scale microservices, distributed systems, and serverless applications
upvoted 14 times
7 months, 1 week ago
Selected Answer: A
Answer is A
upvoted 2 times
7 months, 2 weeks ago
SQS to decouple.
upvoted 2 times
Topic 1
Question #182
A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that
significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that
minimizes data loss and stores every transaction on at least two nodes.
Which solution meets these requirements?
A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.
D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to
an Amazon RDS MySQL DB instance.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
Selected Answer: B
Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data
Standby DB in Multi-AZ- synchronous replication
Read Replica always asynchronous. so option C is ignored.
upvoted 13 times
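For reference, enabling that synchronous standby is a single flag. The dict below is a hedged sketch of the parameters an rds create_db_instance call might take; the identifier, instance class, and sizes are made up for illustration.

```python
# Hedged sketch of rds create_db_instance parameters that enable
# Multi-AZ synchronous replication (identifier and sizes are invented).
params = {
    "DBInstanceIdentifier": "orders-mysql",  # illustrative name
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,
    "MultiAZ": True,  # synchronous standby in a second AZ
}

# MultiAZ=True is the whole answer: the standby receives every committed
# transaction synchronously, so data lives on at least two nodes.
print(params["MultiAZ"])
```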
Most Recent
2 days, 12 hours ago
Selected Answer: B
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
Enabling Multi-AZ functionality in Amazon RDS ensures synchronous replication of data to a standby replica in a different Availability Zone. This
provides high availability and minimizes data loss in the event of a database outage.
A. Creating an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones would provide even higher
availability but is not necessary for the stated requirements.
C. Creating a read replica in a separate AWS Region would provide disaster recovery capabilities but does not ensure synchronous replication or
meet the requirement of storing every transaction on at least two nodes.
D. Using an EC2 instance with a MySQL engine and triggering an AWS Lambda function for replication introduces unnecessary complexity and is
not the most suitable solution for ensuring reliable and synchronous replication.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
RDS Multi-AZ = Synchronous = Disaster Recovery (DR)
Read Replica = Asynchronous = High Availability
upvoted 3 times
2 months, 2 weeks ago
Selected Answer: B
B
since all other answers are wrong
upvoted 1 times
3 months ago
Selected Answer: B
B
Since read replica is async.
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
Multi AZ is not as protected as Multi-Region Read Replica.
upvoted 1 times
5 months, 3 weeks ago
I'm curious to know why A isn't right. Is it just that it would take more effort?
upvoted 3 times
6 months ago
B is correct. C requires more work.
upvoted 1 times
6 months, 1 week ago
Option B
upvoted 1 times
6 months, 1 week ago
Multi-AZ will give at least two nodes as required by the question. The answer is B.
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments with a single standby DB instance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 3 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
6 months, 2 weeks ago
Option A is the correct answer in this scenario because it meets the requirements specified in the question. It creates an Amazon RDS DB instance
with synchronous replication to three nodes in three Availability Zones, which will provide high availability and durability for the database, ensuring
that the data is stored on multiple nodes and automatically replicated across Availability Zones.
Option B is not a correct answer because it creates an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled, which only provides
failover capabilities. It does not enable synchronous replication to multiple nodes, which is required in this scenario.
upvoted 2 times
5 months, 3 weeks ago
Option B is not incorrect: "The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data
redundancy and minimize latency spikes during system backups" from
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 1 times
6 months, 1 week ago
I would go with Option B since it meets the company's requirements and is the most suitable solution.
By creating an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled, the solutions architect will ensure that data is
automatically synchronously replicated across multiple AZs within the same Region. This provides high availability and data durability,
minimizing the risk of data loss and ensuring that every transaction is stored on at least two nodes.
upvoted 1 times
6 months, 2 weeks ago
Maybe C since Amazon RDC now supports cross region read replica https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-rds-sql-
server-cross-region-read-replica/
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: B
Option B is the correct answer:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 1 times
7 months, 2 weeks ago
B is the answer
upvoted 2 times
Topic 1
Question #183
A company is building a new dynamic ordering website. The company wants to minimize server maintenance and patching. The website must be
highly available and must scale read and write capacity as quickly as possible to meet changes in user demand.
Which solution will meet these requirements?
A. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon DynamoDB with
on-demand capacity for the database. Configure Amazon CloudFront to deliver the website content.
B. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon Aurora with Aurora
Auto Scaling for the database. Configure Amazon CloudFront to deliver the website content.
C. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load
Balancer to distribute traffic. Use Amazon DynamoDB with provisioned write capacity for the database.
D. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load
Balancer to distribute traffic. Use Amazon Aurora with Aurora Auto Scaling for the database.
Correct Answer:
A
Highly Voted
7 months, 1 week ago
Selected Answer: A
A - is correct, because Dynamodb on-demand scales write and read capacity
B - Aurora auto scaling scales only read replicas
upvoted 29 times
3 months ago
That's not correct. Amazon Aurora with Aurora Auto Scaling can scale both read and write replicas.
upvoted 3 times
1 week ago
That's why DynamoDB is the best-suited option
upvoted 1 times
1 week ago
Correct... Both can serve the purpose, but note the keyword "must scale read and write capacity as quickly as possible to meet changes in user
demand". DynamoDB can scale more quickly than Aurora. Remember the "PUSH BUTTON SCALING" feature of DynamoDB.
upvoted 1 times
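The "on-demand capacity" in option A is a one-line setting. Below is a hedged sketch of a dynamodb create_table request with on-demand billing; the table and key names are invented for illustration.

```python
# What "on-demand capacity" looks like in a dynamodb create_table call:
# BillingMode PAY_PER_REQUEST replaces ProvisionedThroughput, so reads
# and writes scale instantly with demand. Names are illustrative.
table_spec = {
    "TableName": "orders",  # hypothetical table
    "KeySchema": [{"AttributeName": "order_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"}
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand: no capacity planning
}

# No ProvisionedThroughput block is needed in on-demand mode.
assert "ProvisionedThroughput" not in table_spec
print(table_spec["BillingMode"])
```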
Highly Voted
7 months, 1 week ago
please is this dump enough to pass the exam?
upvoted 9 times
4 months, 3 weeks ago
You can tell us now ? Going by the date of your post I guess you would have challenged the exam by now ? so how did it go ?
upvoted 5 times
7 months, 1 week ago
I HOPE SO
upvoted 8 times
Most Recent
2 days, 12 hours ago
Selected Answer: A
B. This solution leverages serverless technologies like API Gateway and Lambda for hosting dynamic content, reducing server maintenance and
patching. Aurora with Aurora Auto Scaling provides a highly available and scalable database solution. Hosting static content in S3 and configuring
CloudFront for content delivery ensures high availability and efficient scaling.
A. Using DynamoDB with on-demand capacity may provide scalability, but it does not offer the same level of flexibility and performance as Aurora.
Additionally, it does not address the hosting of dynamic content using serverless technologies.
C. Hosting all the website content on EC2 instances requires server maintenance and patching. While using ASG and an ALB helps with availability
and scalability, it does not minimize server maintenance as requested.
D. Hosting all the website content on EC2 instances introduces server maintenance and patching. Using Aurora with Aurora Auto Scaling is a good
choice for the database, but it does not address the need to minimize server maintenance and patching for the overall infrastructure.
upvoted 1 times
4 weeks ago
B isn't correct because of cooldown
You can tune the responsiveness of a target-tracking scaling policy by adding cooldown periods that affect scaling your Aurora DB cluster in and
out. A cooldown period blocks subsequent scale-in or scale-out requests until the period expires. These blocks slow the deletions of Aurora
Replicas in your Aurora DB cluster for scale-in requests, and the creation of Aurora Replicas for scale-out requests.
upvoted 1 times
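The cooldown argument above maps to application-autoscaling parameters: Aurora's scalable dimension is the read replica count, and the target-tracking policy carries the scale-in/scale-out cooldowns. The cluster name and numbers below are illustrative.

```python
# Hedged sketch of application-autoscaling parameters for Aurora replica
# auto scaling (cluster name, bounds, and cooldown values are invented).
target = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:orders-aurora",  # hypothetical cluster
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 8,
}
policy = {
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
        "ScaleInCooldown": 300,   # blocks replica deletion for 5 min
        "ScaleOutCooldown": 300,  # blocks replica creation for 5 min
    },
}
print(policy["TargetTrackingScalingPolicyConfiguration"]["ScaleInCooldown"])
```

Note the dimension only covers read replicas, which is the core of the A-vs-B argument in this thread.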
4 weeks ago
Key word in question "storing ordering data"
DynamoDB is perfect for storing ordering data (key-values)
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: A
Minimize maintenance & Patching = Serverless
S3, DynamoDB are serverless
upvoted 1 times
1 month, 2 weeks ago
The company wants to minimize server maintenance and patching -> Serverless (minimize)
C,D are wrong because these are not serverless
B is wrong because RDS is not serverless
-> A is fully serverless
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
The correct answer is B.
The option A would also meet the company's requirements of minimizing server maintenance and patching, and providing high availability and
quick scaling for read and write capacity. However, there are a few reasons why option B is a more optimal solution:
In option A, it uses Amazon DynamoDB with on-demand capacity for the database, which may not provide the same level of scalability and
performance as using Amazon Aurora with Aurora Auto Scaling.
Amazon Aurora offers additional features such as automatic failover, read replicas, and backups that makes it a more robust and resilient option
than DynamoDB. Additionally, the auto scaling feature is better suited to handle the changes in user demand.
Additionally, option B provides a more cost-effective solution, as Amazon Aurora can be more cost-effective for high read and write workloads
than Amazon DynamoDB, and also it's providing more features.
upvoted 2 times
5 months ago
The answer is A.
Key phrase in the question is "must scale read and write capacity". Aurora Auto Scaling only scales read replicas.
Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables:
On-demand
Provisioned (default, free-tier eligible)
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: A
A for sure ~
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
A. Looking for serverless to reduce maintenance requirements
upvoted 2 times
6 months, 2 weeks ago
A
Amazon DynamoDB with on-demand capacity for the database. This solution allows the website to automatically scale to meet changes in user
demand and minimize the need for server maintenance and patching. B is not a correct answer because it uses Amazon Aurora with Aurora Auto
Scaling for the database; while Amazon Aurora is a highly available and scalable database solution, it is not a suitable choice for this
scenario because it requires server maintenance and patching.
upvoted 1 times
5 months, 3 weeks ago
Right answer but wrong reason. B is not suitable because the requirements say "must scale read and write", but Aurora uses
single-master replication, i.e. only read replicas scale.
upvoted 2 times
7 months, 1 week ago
Selected Answer: A
On-demand mode is a good option if any of the following are true:
You create new tables with unknown workloads.
You have unpredictable application traffic.
You prefer the ease of paying for only what you use.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 2 times
7 months, 1 week ago
A is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: A
"Read/write capacity = DynamoDB"; read replicas are mostly Aurora. @nhlegend yes, DynamoDB has a 400 KB item size maximum, but in the answer
neither DynamoDB nor Aurora was used as primary storage
upvoted 4 times
7 months, 2 weeks ago
Selected Answer: A
Agree with A, DynamoDB is perfect for storing ordering data (key-values)
upvoted 5 times
7 months, 2 weeks ago
A is the answer
upvoted 2 times
Topic 1
Question #184
A company has an AWS account used for software engineering. The AWS account has access to the company’s on-premises data center through a
pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway.
A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access
a database that runs in a private subnet in the company’s data center.
Which solution will meet these requirements?
A. Configure the Lambda function to run in the VPC with the appropriate security group.
B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN.
C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect.
D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network
interface.
Correct Answer:
C
Highly Voted
6 months, 4 weeks ago
Selected Answer: A
To configure a VPC for an existing function:
1. Open the Functions page of the Lambda console.
2. Choose a function.
3. Choose Configuration and then choose VPC.
4. Under VPC, choose Edit.
5. Choose a VPC, subnets, and security groups. <-- **That's why I believe the answer is A**.
Note:
If your function needs internet access, use network address translation (NAT). Connecting a function to a public subnet doesn't give it internet
access or a public IP address.
upvoted 9 times
1 week, 4 days ago
The question says on-prem database... how do we create an SG for that instance in AWS? C makes sense. My 2 cents.
upvoted 1 times
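The console steps above (and the security-group question in the reply) boil down to one configuration object. The sketch below shows the VpcConfig shape passed to Lambda's update_function_configuration; the function name and IDs are placeholders, and the security group's outbound rules would need to allow the database port toward the on-premises CIDR.

```python
# Hedged sketch of the VpcConfig update for an existing Lambda function.
# Function name, subnet IDs, and security group ID are placeholders.
vpc_config_update = {
    "FunctionName": "legacy-db-reader",  # hypothetical function
    "VpcConfig": {
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
}
# Passed to lambda's update_function_configuration; Lambda then creates
# Hyperplane ENIs in those subnets, and the existing routes to the
# virtual private gateway carry traffic on to the data center.
print(len(vpc_config_update["VpcConfig"]["SubnetIds"]))
```

The security group here controls the Lambda function's own traffic; the on-prem database itself is not represented in AWS, which answers the reply's question.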
Highly Voted
6 months, 3 weeks ago
Selected Answer: A
it is A. C is not correct at all as in the question it metions that the VPC already has connectivity with on-premises
upvoted 8 times
5 months, 1 week ago
C says to "update the route table" not create a new connection. C is correct.
upvoted 2 times
3 weeks, 2 days ago
C is wrong. Lambda can't connect by default to resources in a private VPC, so you have to do some specific setup steps to run in a private
VPC, Answer A is correct
upvoted 1 times
2 months, 1 week ago
No need to do route updates. This is because the route to the destination on-premises is already set.
upvoted 2 times
Most Recent
2 days, 12 hours ago
Selected Answer: A
Option A: Configure the Lambda function to run in the VPC with the appropriate security group. This allows the Lambda function to access the
database in the private subnet of the company's data center. By running the Lambda function in the VPC, it can communicate with resources in the
private subnet securely.
Option B is incorrect because setting up a VPN connection and routing the traffic from the Lambda function through the VPN would add
unnecessary complexity and overhead.
Option C is incorrect because updating the route tables in the VPC to allow access to the on-premises data center through Direct Connect would
affect the entire VPC's routing, potentially exposing other resources to the on-premises network.
Option D is incorrect because creating an Elastic IP address and sending traffic through it without an elastic network interface is not a valid
configuration for accessing resources in a private subnet.
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: C
My answer is C. Refer to the steps in the link. need to configure the routing table to route traffic to the destination.
https://aws.amazon.com/blogs/compute/running-aws-lambda-functions-on-aws-outposts-using-aws-iot-greengrass/
A is wrong as it says to configure the Lambda function in the VPC; the requirement is to reach the database that is on premises.
upvoted 2 times
2 months ago
Selected Answer: A
once you have configured your Lambda to be deployed (or connected) to your VPC [1], as long as your VPC has connectivity to your data center, it
will be allowed to route the traffic towards it - whether it uses Direct Connect or other connections, like VPN.
https://repost.aws/questions/QUSaj1a6jBQ92Kp56klbZFNw/questions/QUSaj1a6jBQ92Kp56klbZFNw/aws-lambda-to-on-premise-via-direct-
connect-and-aws-privatelink?
upvoted 1 times
2 months, 1 week ago
C
Because the traffic goes out from AWS to the company's data center.
2 months, 1 week ago
english please
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
CORRECT ANSWER = A,
C = WRONG because in the question, it says non-VPC traffic is being sent through the virtual private gateway (Direct Connect), meaning all routes
are looking towards on-prem, where our destination service is located. So no routing change will be needed.
When you create the Lambda (function) -> you need to choose the VPC and then the security group inside the VPC.
Link for better understanding :
https://www.youtube.com/watch?v=beV1AYyhgYA&ab_channel=DigitalCloudTraining
upvoted 3 times
2 months, 3 weeks ago
it is telling non "VPC" traffic, really wish there was edit function lol
upvoted 1 times
3 months, 1 week ago
In my opinion this question is flawed. None of the answers makes any sense to me. However, if I have to choose one I will choose C. There is no
option of associating a security group with a Lambda function.
upvoted 2 times
4 months, 1 week ago
Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-managing-eni
upvoted 2 times
4 months, 1 week ago
Selected Answer: A
The best solution to meet the requirements would be option A - Configure the Lambda function to run in the VPC with the appropriate security
group.
By configuring the Lambda function to run in the VPC, the function will have access to the private subnets in the company's data center through
the Direct Connect connections. Additionally, security groups can be used to control inbound and outbound traffic to and from the Lambda
function, ensuring that only the necessary traffic is allowed.
upvoted 2 times
4 months, 1 week ago
Option B is not ideal as it would require additional configuration and management of a VPN connection between the company's data center
and AWS, which may not be necessary for the specific use case.
Option C is not recommended as updating the route tables to allow the Lambda function to access the on-premises data center through Direct
Connect would allow all VPC traffic to route through the data center, which may not be desirable and could potentially create security risks.
Option D is not a viable solution for accessing resources in the on-premises data center as Elastic IP addresses are only used for outbound
internet traffic from an Amazon VPC, and cannot be used to communicate with resources in an on-premises data center.
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: A
"All non-VPC traffic routes to the virtual private gateway." means -> there are already the appropriate routes, so no need for update the route
tables.
Key phrase: "database that runs in a private subnet in the company's data center.", means: You need the appropriate security group to access the
DB.
upvoted 3 times
5 months, 1 week ago
Selected Answer: A
A makes more sense to me.
upvoted 1 times
5 months, 3 weeks ago
A = Answer.
Note that " All non-VPC traffic routes to the virtual gateway" meaning if traffic not meant for the VPC, it routes to on-prem (C answer invalid). For
the Lambda function to access the on-prem database you have to configure the Lambda function in the VPC and use appropriate SG outbound.
Phew! did some research on this, was a bit confused with C.
upvoted 5 times
4 months, 2 weeks ago
Yes, the Lambda is not connected to an Amazon VPC by default. So answer A
upvoted 1 times
6 months ago
Selected Answer: C
it is C only
upvoted 2 times
6 months, 1 week ago
Selected Answer: C
To allow an AWS Lambda function to access a database in a private subnet in the company's data center, the correct solution is to update the route
tables in the Virtual Private Cloud (VPC) to allow the Lambda function to access the on-premises data center through the AWS Direct Connect
connections.
Option C, updating the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect, is the
correct solution to meet the requirements.
upvoted 2 times
5 months, 3 weeks ago
Sorry, but like a lot of your responses in this group, your answers are incorrect. I really think you need to study more, unless you are deliberately
trying to confuse people. "All non-VPC traffic routes to the virtual private gateway" means that C is not necessary.
upvoted 6 times
4 months, 4 weeks ago
Have noticed the Buru----tuy guy/girl likes giving incorrect answers.
upvoted 2 times
4 months, 2 weeks ago
Most likely Buru----tuy is getting responses from ChatGPT, which is not always right.
upvoted 5 times
6 months, 1 week ago
Option A, configuring the Lambda function to run in the VPC with the appropriate security group, is not the correct solution because it does not
allow the Lambda function to access the database in the private subnet in the data center.
Option B, setting up a VPN connection from AWS to the data center and routing the traffic from the Lambda function through the VPN, is not
the correct solution because it would not be the most efficient solution, as the traffic would need to be routed over the public internet,
potentially increasing latency.
Option D, creating an Elastic IP address and configuring the Lambda function to send traffic through the Elastic IP address without an elastic
network interface, is not a valid solution because Elastic IP addresses are used to assign a static public IP address to an instance or network
interface, and do not provide a direct connection to an on-premises data center.
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 1 times
7 months ago
Selected Answer: A
When you connect a function to a VPC, Lambda assigns your function to a Hyperplane ENI (elastic network interface) for each subnet in your
function's VPC configuration. Lambda creates a Hyperplane ENI the first time a unique subnet and security group combination is defined for a VPC-
enabled function in an account.
upvoted 2 times
Topic 1
Question #185
A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API
calls to store the resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?
A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
Correct Answer:
B
Highly Voted
6 months, 1 week ago
Selected Answer: B
To ensure that an Amazon Elastic Container Service (ECS) application has permission to access Amazon Simple Storage Service (S3), the correct
solution is to create an AWS Identity and Access Management (IAM) role with the necessary S3 permissions and specify that role as the taskRoleArn
in the task definition for the ECS application.
Option B, creating an IAM role with S3 permissions and specifying that role as the taskRoleArn in the task definition, is the correct solution to meet
the requirement.
upvoted 5 times
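As a rough sketch of the taskRoleArn wiring described above, here is a minimal task-definition fragment in boto3 form. The family name, account ID, role name, and image URI are all hypothetical placeholders.

```python
# Sketch: ECS task definition that grants the container S3 access via
# taskRoleArn (answer B). Account ID, role name, and image URI are
# placeholders, not values from the question.
task_definition = {
    "family": "image-resizer",
    "taskRoleArn": "arn:aws:iam::123456789012:role/ImageResizerS3Role",
    "containerDefinitions": [
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "memory": 512,
            "essential": True,
        }
    ],
}

# In practice: boto3.client("ecs").register_task_definition(**task_definition)
# Code inside the container then calls S3 with the role's temporary
# credentials -- no IAM user keys and no instance relaunch needed.
```

The design point the discussion makes is that the role attaches to the *task*, not to the EC2 instances or a security group, which is why options A, C, and D miss.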
6 months, 1 week ago
Option A, updating the S3 role in IAM to allow read/write access from ECS and relaunching the container, is not the correct solution because the
S3 role is not associated with the ECS application.
Option C, creating a security group that allows access from ECS to S3 and updating the launch configuration used by the ECS cluster, is not the
correct solution because security groups are used to control inbound and outbound traffic to resources, and do not grant permissions to access
resources.
Option D, creating an IAM user with S3 permissions and relaunching the EC2 instances for the ECS cluster while logged in as this account, is not
the correct solution because it is generally considered best practice to use IAM roles rather than IAM users to grant permissions to resources.
upvoted 2 times
Most Recent
2 days, 12 hours ago
Selected Answer: B
Option B: Create an IAM role with S3 permissions and specify that role as the taskRoleArn in the task definition. This approach allows the ECS task
to assume the specified role and gain the necessary permissions to access Amazon S3.
Option A is incorrect because updating the S3 role in IAM and relaunching the container does not associate the updated role with the ECS task.
Option C is incorrect because creating a security group that allows access from Amazon ECS to Amazon S3 does not grant the necessary
permissions to the ECS task.
Option D is incorrect because creating an IAM user with S3 permissions and relaunching the EC2 instances for the ECS cluster does not associate
the IAM user with the ECS task.
upvoted 1 times
4 weeks ago
https://repost.aws/knowledge-center/ecs-fargate-access-aws-services
upvoted 1 times
6 months ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/27954-exam-aws-certified-solutions-architect-associate-saa-c02/
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
upvoted 1 times
6 months ago
Selected Answer: B
The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task
permission to call AWS APIs on your behalf.
upvoted 1 times
6 months, 1 week ago
Option B
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: B
Agreed
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
B is the best answer
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
Selected Answer: B
The answer is B.
upvoted 1 times
7 months, 2 weeks ago
B is the answer
upvoted 2 times
Topic 1
Question #186
A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system
attached to multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.
What should a solutions architect do to meet this requirement?
A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance.
B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.
C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.
D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the
file system within the volume to each Windows instance.
Correct Answer:
B
Highly Voted
7 months, 2 weeks ago
Correct is B
FSx --> shared Windows file system (SMB)
EFS --> Linux (NFS)
upvoted 6 times
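A minimal sketch of the FSx for Windows File Server setup described above, in boto3 form. The capacity and throughput values, subnet IDs, and share path are hypothetical, and required Active Directory settings are omitted for brevity.

```python
# Sketch: create a Multi-AZ FSx for Windows File Server file system
# (answer B). Sizes and subnet IDs are placeholders; a real deployment
# also needs Active Directory configuration.
fsx_params = {
    "FileSystemType": "WINDOWS",
    "StorageCapacity": 300,  # GiB
    "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
    "WindowsConfiguration": {
        "DeploymentType": "MULTI_AZ_1",  # standby file server in a second AZ
        "ThroughputCapacity": 32,        # MB/s
        "PreferredSubnetId": "subnet-0aaa1111",
    },
}

# In practice: boto3.client("fsx").create_file_system(**fsx_params)
# Each Windows instance then maps the SMB share, e.g.:
#   net use Z: \\<file-system-dns-name>\share
```

The MULTI_AZ_1 deployment type is what satisfies the multi-AZ requirement: FSx maintains a standby file server in the second subnet and fails over automatically.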
Most Recent
2 days, 12 hours ago
Selected Answer: B
Option B: Configure Amazon FSx for Windows File Server. This service provides a fully managed Windows file system that can be easily shared
across multiple EC2 Windows instances. It offers high performance and supports Windows applications that require file storage.
Option A is incorrect because AWS Storage Gateway in volume gateway mode is not designed for shared file systems.
Option C is incorrect because while Amazon EFS can be mounted to multiple instances, it is a Linux-based file system and may not be suitable for
Windows applications.
Option D is incorrect because attaching and mounting an Amazon EBS volume to multiple instances simultaneously is not supported.
upvoted 1 times
1 month ago
Selected Answer: B
Option B is right answer.
upvoted 1 times
6 months ago
Selected Answer: B
References :
https://www.examtopics.com/discussions/amazon/view/28006-exam-aws-certified-solutions-architect-associate-saa-c02/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/wfsx-volumes.html
upvoted 1 times
6 months ago
Selected Answer: B
EFS is not compatible with Windows.
https://pilotcoresystems.com/insights/ebs-efs-fsx-s3-how-these-storage-options-differ/#:~:text=EFS%20works%20with%20Linux%20and,with%20all%20Window%20Server%20platforms.
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance.
This option is incorrect because AWS Storage Gateway is not a file storage service. It is a hybrid storage service that allows you to store data in the
cloud while maintaining low-latency access to frequently accessed data. It is designed to integrate with on-premises storage systems, not to
provide file storage for Amazon EC2 instances.
B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.
This is the correct answer. Amazon FSx for Windows File Server is a fully managed file storage service that provides a native Windows file system
that can be accessed over the SMB protocol. It is specifically designed for use with Windows-based applications, and it can be easily integrated
with existing applications by mounting the file system to each EC2 instance.
upvoted 3 times
6 months, 1 week ago
C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.
This option is incorrect because Amazon EFS is a file storage service that is designed for use with Linux-based applications. It is not compatible
with Windows-based applications, and it cannot be accessed over the SMB protocol.
D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file
system within the volume to each Windows instance.
This option is incorrect because Amazon EBS is a block storage service, not a file storage service. It is designed for storing raw block-level data
that can be accessed by a single EC2 instance at a time. It is not designed for use as a shared file system that can be accessed by multiple
instances.
upvoted 1 times
6 months, 1 week ago
B - is correct
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B
upvoted 1 times
7 months, 1 week ago
B is correct
upvoted 1 times
7 months, 1 week ago
B FSx for windows
upvoted 1 times
7 months, 1 week ago
B is correct option
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: B
Amazon FSx for Windows File Server
upvoted 3 times
Topic 1
Question #187
A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational
database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.
Which solutions meet these requirements? (Choose two.)
A. Create an Amazon RDS DB instance in Multi-AZ mode.
B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.
C. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.
Correct Answer:
AD
Highly Voted
6 months ago
Selected Answer: AD
https://containersonaws.com/introduction/ec2-or-aws-fargate/
A.(O) multi-az <= 'little intervention'
B.(X) read replica <= Promoting a read replica to be a standalone DB instance
You can promote a read replica into a standalone DB instance. When you promote a read replica, the DB instance is rebooted before it becomes
available.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
C.(X) use Amazon ECS instead of EC2-based docker for little human intervention
D.(O) Amazon ECS on AWS Fargate : AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage
servers or clusters of Amazon EC2 instances.
E.(X) EC2 launch type
The EC2 launch type can be used to run your containerized applications on Amazon EC2 instances that you register to your Amazon ECS cluster
and manage yourself.
upvoted 10 times
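The two chosen pieces, a Multi-AZ RDS instance and an ECS service on Fargate, can be sketched as boto3 parameters. The identifiers and sizes are hypothetical, and credentials and network configuration are omitted for brevity.

```python
# Sketch: the two managed building blocks behind answers A and D.
# Identifiers and sizes are placeholders; master credentials and
# networkConfiguration are omitted for brevity.
rds_params = {
    "DBInstanceIdentifier": "ecommerce-db",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MultiAZ": True,  # synchronous standby + automatic failover (answer A)
}

ecs_service_params = {
    "cluster": "ecommerce",
    "serviceName": "storefront",
    "taskDefinition": "storefront:1",
    "desiredCount": 2,
    "launchType": "FARGATE",  # no EC2 instances to patch or scale (answer D)
}

# In practice:
#   boto3.client("rds").create_db_instance(**rds_params)
#   boto3.client("ecs").create_service(**ecs_service_params)
```

Both settings encode the "little manual intervention" requirement directly: failover and container placement are handled by the services, not by operators.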
Most Recent
2 days, 12 hours ago
Selected Answer: AD
A. Create an Amazon RDS DB instance in Multi-AZ mode. This ensures that the database is highly available with automatic failover to a standby
replica in another Availability Zone.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load. Fargate
abstracts the underlying infrastructure, automatically scaling and managing the containers, making it a highly available and low-maintenance
option.
Option B is not the best choice as it only creates replicas in another Availability Zone without the automatic failover capability provided by Multi-
AZ mode.
Option C is not the best choice as managing a Docker cluster on EC2 instances requires more manual intervention compared to using the
serverless capabilities of Fargate in option D.
Option E is not the best choice as it uses the EC2 launch type, which requires managing and scaling the EC2 instances manually. Fargate, as
mentioned in option D, provides a more automated and scalable solution.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: AD
little manual intervention = Serverless
upvoted 1 times
6 months, 1 week ago
Selected Answer: AD
Option A&D
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: AD
A and D
upvoted 1 times
7 months ago
Selected Answer: AD
A and D
upvoted 1 times
7 months ago
A and D
upvoted 1 times
7 months, 1 week ago
A and D are the options
upvoted 1 times
7 months, 2 weeks ago
AD for sure
Link: https://www.examtopics.com/discussions/amazon/view/43729-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
Topic 1
Question #188
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs
to implement a highly available SFTP solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the
destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway
endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run
a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an
SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the
S3 data lake.
Correct Answer:
D
Highly Voted
6 months ago
Answer is A
AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage services using SFTP, FTPS, FTP, and AS2
protocols.
https://aws.amazon.com/aws-transfer-family/
upvoted 9 times
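A minimal sketch of the Transfer Family server from answer A, in boto3 form. The field names are the real CreateServer parameters; the data-lake bucket path mentioned in the comment is a hypothetical placeholder.

```python
# Sketch: a publicly accessible, service-managed SFTP endpoint backed
# by Amazon S3 (answer A).
sftp_server_params = {
    "Protocols": ["SFTP"],
    "Domain": "S3",                            # store uploads in S3
    "EndpointType": "PUBLIC",                  # publicly accessible endpoint
    "IdentityProviderType": "SERVICE_MANAGED", # users managed by Transfer Family
}

# In practice: boto3.client("transfer").create_server(**sftp_server_params)
# The partner is then given a user mapped to the data-lake bucket, e.g.
# create_user(HomeDirectory="/my-data-lake/partner-a", ...) -- the bucket
# name and path here are placeholders.
```

Because the service is fully managed and highly available behind a single endpoint, it carries none of the EC2/NLB/cron maintenance that options C and D would require.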
Most Recent
2 days, 12 hours ago
This solution provides a highly available SFTP solution without the need for manual management or operational overhead. AWS Transfer Family
allows you to easily set up an SFTP server with authentication, authorization, and integration with S3 as the storage backend.
Option B is not the best choice as it suggests using Amazon S3 File Gateway, which is primarily used for file-based access to S3 storage over NFS or
SMB protocols, not for SFTP access.
Option C is not the best choice as it requires manual management of an EC2 instance, VPN setup, and cron job script for uploading files,
introducing operational overhead and potential complexity.
Option D is not the best choice as it also requires manual management of EC2 instances, Network Load Balancer, and cron job scripts for file
uploads. It is more complex and involves additional components compared to the simpler and fully managed solution provided by AWS Transfer
Family in option A.
upvoted 1 times
2 days, 12 hours ago
A is correct
upvoted 1 times
1 week, 4 days ago
I can't wrap my head around why the answer is D. It is so frustrating trying to see where I went wrong. I vote for A.
upvoted 1 times
1 month ago
For the exam:
Whenever you see SFTP or FTP, look for "Transfer" among the available options
upvoted 4 times
1 month, 2 weeks ago
Selected Answer: A
minimizes operational overhead = Serverless
AWS Transfer Family is serverless
upvoted 1 times
1 month, 3 weeks ago
AWS Transfer Family is compatible with SFTP, FTPS, and FTP. A is the answer
upvoted 1 times
2 months ago
Selected Answer: A
AWS Transfer Family is a fully managed AWS service that you can use to transfer files into and out of Amazon Simple Storage Service (Amazon S3)
storage or Amazon Elastic File System (Amazon EFS) file systems over the following protocols:
Secure Shell (SSH) File Transfer Protocol (SFTP): version 3
File Transfer Protocol Secure (FTPS)
File Transfer Protocol (FTP)
Applicability Statement 2 (AS2)
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: A
A - is the correct answer.
upvoted 2 times
6 months, 1 week ago
A -- is the option
upvoted 3 times
6 months, 1 week ago
Selected Answer: A
Option A
upvoted 3 times
7 months ago
Selected Answer: A
AWS Transfer Family - SFTP
upvoted 2 times
7 months, 1 week ago
Selected Answer: A
AAAAAAAA
AWS Transfer for SFTP, a fully-managed, highly-available SFTP service. You simply create a server, set up user accounts, and associate the server
with one or more Amazon Simple Storage Service (Amazon S3) buckets
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: A
A is the answer - https://docs.aws.amazon.com/transfer/latest/userguide/create-server-sftp.html
upvoted 2 times
7 months, 2 weeks ago
A is the answer
upvoted 1 times
7 months, 2 weeks ago
Selected Answer: A
answer is A
upvoted 2 times
7 months, 2 weeks ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/83197-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Topic 1
Question #189
A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the
documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically
every year.
Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Choose two.)
A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode.
B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode.
C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation.
D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation.
E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.
Correct Answer:
CE
Highly Voted
7 months ago
Selected Answer: BD
Originally answered B and C due to least operational overhead. After research, it's bugging me that the S3 key rotation is determined by the AWS
master key rotation, which cannot guarantee the key is rotated within a 365-day period; it is stated as "varies" in the documentation. Also, it's
impossible to configure this in the console.
KMS-C is a tick box in the console to turn on annual key rotation, but requires more operational overhead than SSE-S3.
C - will not guarantee the question's objectives but requires little overhead.
D - will guarantee the question's objectives with more overhead.
upvoted 18 times
6 months, 1 week ago
I'd have to disagree on that. It states here that AWS managed keys are rotated every year, which is what the question asks:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html so C would be correct.
However, it also states that you cannot enable or disable rotation for aws managed keys which would again point towards D
upvoted 2 times
2 months, 2 weeks ago
You can't use this link
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
to say that SSE-S3 rotates every year, because that link refers precisely to KMS, which is covered by option D.
That is the reason the solution is B+D.
upvoted 2 times
Highly Voted
7 months, 2 weeks ago
Selected Answer: BD
should be BD
C could have been fine, but key rotation is active by default on SSE-S3, and there is no way to deactivate it, if I am not wrong
upvoted 6 times
Most Recent
2 days, 12 hours ago
Selected Answer: BD
B. By using S3 Object Lock in compliance mode, it enforces a strict retention policy on the objects, preventing any modifications or deletions.
D. By using server-side encryption with AWS KMS customer managed keys, the documents are encrypted with a customer-controlled key. Enabling
key rotation ensures that a new encryption key is generated automatically at the defined rotation interval, enhancing security.
Option A: S3 Object Lock in governance mode does not provide the required immutability for the documents, allowing potential modifications or
deletions.
Option C: Server-side encryption with SSE-S3 alone does not fulfill the requirement of encryption key rotation, which is explicitly specified.
Option E: Server-side encryption with customer-provided (imported) keys (SSE-C) is not necessary when AWS KMS customer managed keys (Option
D) can be used, which provide a more integrated and manageable solution.
upvoted 1 times
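The B+D combination argued above can be sketched as two boto3 calls. The bucket name and KMS key ID below are hypothetical placeholders.

```python
# Sketch: the two settings behind answers B and D. The bucket name and
# KMS key ID are placeholders, not real identifiers.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        # COMPLIANCE mode: no user, including root, can overwrite or
        # delete the objects during the retention period.
        "DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}
    },
}

# In practice:
#   boto3.client("s3").put_object_lock_configuration(
#       Bucket="contract-documents",
#       ObjectLockConfiguration=object_lock_config)    # answer B
#   boto3.client("kms").enable_key_rotation(
#       KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")  # answer D: annual rotation
```

Note that enable_key_rotation only works on customer managed keys, which is the crux of the C-vs-D debate in this thread: with SSE-S3 there is simply nothing for the customer to configure.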
1 month ago
Selected Answer: BD
Answer is BD. C is discarded because key rotation can't be configured by the customer
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: BD
With SSE-S3 you can NOT Configure key rotation (see the choice C last sentence)
With KMS you can configure key rotation
upvoted 1 times
1 month, 2 weeks ago
also, SSE-S3 is default and free. The question is not about cost, it is about operational maintenance
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BD
My answer is B and D.
I choose D over C cos of the annual key rotation requirement.
upvoted 1 times
2 months ago
Selected Answer: BD
Consider using the default aws/s3 KMS key if:
You're uploading or accessing S3 objects using AWS Identity and Access Management (IAM) principals that are in the same AWS account as the
AWS KMS key.
You don't want to manage policies for the KMS key.
Consider using a customer managed key if:
You want to create, rotate, disable, or define access controls for the key.
You want to grant cross-account access to your S3 objects. You can configure the policy of a customer managed key to allow access from another
account.
https://repost.aws/knowledge-center/s3-object-encryption-keys
upvoted 1 times
2 months ago
Selected Answer: BD
BD
"You cannot enable or disable key rotation for AWS owned keys. The key rotation strategy for an AWS owned key is determined by the AWS service
that creates and manages the key."
This eliminates option c which says configure key rotation
upvoted 3 times
2 months ago
Selected Answer: BC
I chose C instead of D because of this part of the question: "LEAST operational overhead"
AWS KMS automatically rotates AWS managed keys every year (approximately 365 days). You cannot enable or disable key rotation for AWS
managed keys
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times
3 months, 1 week ago
Selected Answer: BD
The answer is B and D
C is not correct. with SSe-S3 encryption, you do not have control over the key rotation.
upvoted 3 times
3 months, 2 weeks ago
Selected Answer: BD
C is wrong. see this:
https://stackoverflow.com/questions/63478626/which-aws-s3-encryption-technique-provides-rotation-policy-for-encryption-
keys#:~:text=This%20uses%20your%20own%20key,automatically%20rotated%20every%201%20year.
it said "SSE-S3 - is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and is
shared among many accounts. Its rotation is automatic with time that varies as shown in the table here. The time is not explicitly defined." .
So SSE-S3 does have key rotation, but the user cannot configure the rotation frequency. It varies and is managed by AWS, NOT by the user.
upvoted 2 times
4 months ago
2. THE QUESTION ASKS FOR - The company needs to encrypt the documents at rest and rotate the encryption keys automatically every year.
READ: https://docs.aws.amazon.com/kms/latest/developerguide/overview.html
ANSWER - D
upvoted 1 times
4 months ago
1. THE QUESTION ASKS THE FOLLOWING: During the 5-year period, the company must ensure that the documents cannot be overwritten or deleted.
SEE: https://jayendrapatil.com/tag/s3-object-lock-in-governance-mode/
ANSWER: B
AM GOING TO RESEARCH THE SECOND PART OF THE QUESTION.
JESUS IS GOOD..
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: BD
C or D -> Trick question:
C is wrong because the keys are rotated automatically by the S3 service in the (SSE-S3) option.
You are correct that the question says "rotate the encryption keys automatically every year."
But Answer C says: "Configure key rotation", and that you cannot do with (SSE-S3), because it rotates automatically ;)
upvoted 2 times
4 months, 3 weeks ago
Selected Answer: AD
compliance mode is unnecessary here.
upvoted 1 times
4 months, 2 weeks ago
the company must ensure that the documents cannot be overwritten or deleted.
This is the definition of compliance mode, it is absolutely needed here.
upvoted 7 times
4 months, 3 weeks ago
totally agree.
upvoted 1 times
5 months ago
Selected Answer: BD
Answer C mentions "Configure key rotation", but SSE-S3 does not have a key rotation configuration.
upvoted 2 times
4 months, 3 weeks ago
it does not have that configuration because it is built into it. A and C are correct
upvoted 1 times
5 months, 1 week ago
What part of the question required customer intervention for annual key rotation? I don't get why automatic rotation is so difficult to grasp; SSE-S3
rotates the key automatically annually, as the question required.
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 3 times
Topic 1
Question #190
A company has a web application that is based on Java and PHP. The company plans to move the application from on premises to AWS. The
company needs the ability to test new site features frequently. The company also needs a highly available and managed solution that requires
minimum operational overhead.
Which solution will meet these requirements?
A. Create an Amazon S3 bucket. Enable static web hosting on the S3 bucket. Upload the static content to the S3 bucket. Use AWS Lambda to
process all dynamic content.
B. Deploy the web application to an AWS Elastic Beanstalk environment. Use URL swapping to switch between multiple Elastic Beanstalk
environments for feature testing.
C. Deploy the web application to Amazon EC2 instances that are configured with Java and PHP. Use Auto Scaling groups and an Application
Load Balancer to manage the website’s availability.
D. Containerize the web application. Deploy the web application to Amazon EC2 instances. Use the AWS Load Balancer Controller to
dynamically route traffic between containers that contain the new site features for testing.
Correct Answer:
D
Highly Voted
6 months, 2 weeks ago
B
Elastic Beanstalk is a fully managed service that makes it easy to deploy and run applications in the AWS; To enable frequent testing of new site
features, you can use URL swapping to switch between multiple Elastic Beanstalk environments.
upvoted 8 times
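The URL-swapping step from answer B can be sketched as a single boto3 call. The environment names below are hypothetical placeholders.

```python
# Sketch: blue/green feature testing with Elastic Beanstalk (answer B).
# Environment names are placeholders.
swap_params = {
    "SourceEnvironmentName": "webapp-prod",
    "DestinationEnvironmentName": "webapp-staging",
}

# In practice:
#   boto3.client("elasticbeanstalk").swap_environment_cnames(**swap_params)
# The two environments exchange CNAMEs, so traffic shifts to the tested
# environment without redeploying, and can be swapped back just as easily.
```

This is what makes frequent feature testing cheap operationally: the new version is validated in its own environment, then promoted by a DNS-level swap rather than an in-place deployment.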
Most Recent
2 days, 12 hours ago
Selected Answer: B
B. Provides a highly available and managed solution with minimum operational overhead. By deploying the web application to Elastic Beanstalk, the
infrastructure and platform management are abstracted, allowing easy deployment and scalability. With URL swapping, different environments can
be created for testing new site features, and traffic can be routed between these environments without any downtime.
A. Suggests using S3 for static content hosting and Lambda for dynamic content. While it offers simplicity for static content, it does not provide the
necessary flexibility and dynamic functionality required by a Java and PHP-based web application.
C. Involves manual management of EC2, ASG, and ELB, which requires more operational overhead and may not provide the desired level of
availability and ease of testing.
D. Introduces containerization, which adds complexity and operational overhead for managing containers and infrastructure, making it less suitable
for a requirement of minimum operational overhead.
upvoted 1 times
4 weeks ago
S3 is for hosting static websites not dynamic websites or applications
Beanstalk will take care of this.
upvoted 1 times
2 months ago
Selected Answer: B
Frequent feature testing -
- Multiple Elastic Beanstalk environments can be created easily for development, testing and production use cases.
- Traffic can be routed between environments for A/B testing and feature iteration using simple URL swapping techniques. No complex routing
rules or infrastructure changes required.
upvoted 1 times
2 months, 3 weeks ago
who needs discussion in the era of ChatGPT
upvoted 2 times
4 months, 2 weeks ago
Option B as it has the minimum operational overhead
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: B
Blue/Green deployments https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: B
is correct
upvoted 1 times
5 months, 3 weeks ago
As I was told, Elastic Beanstalk is an expensive service, isn't it?
upvoted 2 times
5 months, 3 weeks ago
so what? The question doesn’t require the most cost-effective solution
upvoted 8 times
6 months ago
Selected Answer: B
D includes additional overhead of installing.
upvoted 2 times
6 months, 1 week ago
B -- is correct answer
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Option B as it has the minimum operational overhead
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
B looks correct
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
B is the correct. 100%. i have confirmation
upvoted 2 times
7 months ago
Answer B
upvoted 1 times
7 months ago
for containers, you need source image. Beanstalk is configurable runtime environment - you can choose stack (java, php, ..) and its version. Much
more easier to deploy and use compared to containers.
upvoted 2 times
7 months, 1 week ago
Selected Answer: D
wow, so many votes for B.
B would be correct if the application required only one runtime, Java or PHP; Elastic Beanstalk allows you to specify only one runtime. The
requirement is a "web application that is based on Java and PHP",
so B is out.
D allows you to set up your own container, where you may install as many runtimes as the system needs
upvoted 2 times
1 week, 4 days ago
Why would anyone create an app with both Java and PHP? That itself is an operational overhead, maintaining resources... anyway, "and" is the
keyword... in fast reading we typically miss that part, and in our mind we are thinking "oh well, an app with Java or PHP". Good catch. Hope that is
not a typo; that one word changes the answer.
upvoted 1 times
6 months, 2 weeks ago
You can’t set up a containerized application on ec2.
upvoted 1 times
6 months, 3 weeks ago
You are right, Beanstalk allows Java or PHP, but not both. I think there could be an error in the question text, as it also mentions that it needs to
be a managed service and also able to test new features frequently, so url swapping is great for this. I would choose B
upvoted 2 times
7 months ago
D can also be done by Elastic Beanstalk. The answer is B, as using Beanstalk removes the overhead
AWS Elastic Beanstalk is the fastest way to get web applications up and running on AWS. You can simply upload your application code, and the
service automatically handles details such as resource provisioning, load balancing, auto scaling, and monitoring. Elastic Beanstalk is ideal if you
have a PHP, Java, Python, Ruby, Node.js, .NET, Go, or Docker web application. Elastic Beanstalk uses core AWS services such as Amazon Elastic
Compute Cloud (EC2), Amazon Elastic Container Service (ECS), AWS Auto Scaling, and Elastic Load Balancing (ELB) to easily support applications
that need to scale to serve millions of users.
upvoted 4 times
7 months ago
But Elastic Beanstalk configs only support one runtime at once, so you cannot automatically have Java and PHP, unless you go to EC2 directly
and install another runtime.
upvoted 1 times
6 months, 3 weeks ago
Don't get your point here... how can you justify Option D for a 'highly available' and 'managed' solution when you're containerizing your
apps and deploying your containers on EC2s w/o any Auto Scaling groups involved?? ...the need in the question is about removing the
overhead of managing different layers of computation involved.
upvoted 1 times
6 months, 1 week ago
Yeah, agree that D doesn't look as correct. I had read EC2 as ECS the first time, so ECS and containers seemed a good fit.
I don't think it's D, and I don't think it's B either, because by default Elastic Beanstalk doesn't allow you to have PHP and Java at the same time.
upvoted 1 times
Topic 1
Question #191
A company has an ordering application that stores customer information in Amazon RDS for MySQL. During regular business hours, employees
run one-time queries for reporting purposes. Timeouts are occurring during order processing because the reporting queries are taking a long time
to run. The company needs to eliminate the timeouts without preventing employees from performing queries.
What should a solutions architect do to meet these requirements?
A. Create a read replica. Move reporting queries to the read replica.
B. Create a read replica. Distribute the ordering application to the primary DB instance and the read replica.
C. Migrate the ordering application to Amazon DynamoDB with on-demand capacity.
D. Schedule the reporting queries for non-peak hours.
Correct Answer:
B
Highly Voted
6 months, 1 week ago
A is correct answer. This was in my exam
upvoted 13 times
3 months, 1 week ago
Did these questions help with your exam?
upvoted 2 times
Most Recent
2 days, 12 hours ago
Selected Answer: A
A. By moving the reporting queries to the read replica, the primary DB instance used for order processing is not affected by the long-running
reporting queries. This helps eliminate timeouts during order processing while allowing employees to perform their queries without impacting the
application's performance.
B. While this can provide some level of load distribution, it does not specifically address the issue of timeouts caused by reporting queries during
order processing.
C. While DynamoDB offers scalability and performance benefits, it may require significant changes to the application's data model and querying
approach.
D. While this approach can help alleviate the impact on order processing, it does not address the requirement of eliminating timeouts without
preventing employees from performing queries.
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: A
correct
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: A
A is correct.
upvoted 1 times
2 months ago
Selected Answer: A
Creating a read replica allows the company to offload the reporting queries to a separate database instance, reducing the load on the primary
database used for order processing. By moving the reporting queries to the read replica, the ordering application running on the primary DB
instance can continue to process orders without timeouts due to the long-running reporting queries.
Option B is not a good solution because distributing the ordering application to the primary DB instance and the read replica does not address the
issue of long-running reporting queries causing timeouts during order processing.
upvoted 1 times
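The split the explanations above describe — order-processing writes stay on the primary, long-running reporting reads go to the replica — can be sketched as a tiny routing helper. The endpoint hostnames below are hypothetical placeholders, not real AWS values:

```python
# Minimal sketch of read/write routing between an RDS primary and a read
# replica. Endpoint hostnames are made-up placeholders for illustration.
PRIMARY_ENDPOINT = "orders-db.xxxxxxxx.us-east-1.rds.amazonaws.com"
REPLICA_ENDPOINT = "orders-db-replica.xxxxxxxx.us-east-1.rds.amazonaws.com"

def endpoint_for(query_kind: str) -> str:
    """Route reporting queries to the read replica; everything else
    (order processing, i.e. writes) goes to the primary."""
    if query_kind == "reporting":
        return REPLICA_ENDPOINT
    return PRIMARY_ENDPOINT
```

Because the replica only serves reads, the reporting workload can no longer block or time out order-processing transactions on the primary.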
2 months, 1 week ago
Please DM contributor access: yi.liiiii520@gmail.com
upvoted 2 times
2 months ago
Community vote distribution
A (100%)
bro i need contibutor access please
upvoted 1 times
2 months, 1 week ago
Selected Answer: A
Answer: A
upvoted 1 times
3 months ago
Selected Answer: A
Answer : A
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
SUMMA SUMMA KICK ERUDHAE ! ULUKULAE NALA BHODHA ERUDHAE !
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: A
A is correct
upvoted 1 times
6 months ago
Selected Answer: A
We can't distribute write load to a read replica.
upvoted 2 times
6 months, 1 week ago
Selected Answer: A
Option A is right answer
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
A - is correct because reporting is OK to run on replicated data with some delay in replication.
B - is incorrect because the main app cannot be pointed at the read replica to handle write operations (writes are not allowed on a read
replica), and there is nothing mentioned that only read operations would be performed there.
upvoted 2 times
7 months ago
A is the correct ans
upvoted 1 times
7 months ago
Selected Answer: A
It's A from an old question: https://www.examtopics.com/discussions/amazon/view/81535-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
7 months ago
Selected Answer: A
Timeouts occur because of the queries, so using a read replica for the queries is the correct answer.
upvoted 2 times
7 months ago
Selected Answer: A
It should be read load to read replica
upvoted 1 times
Topic 1
Question #192
A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new
documents each day. The hospital’s data team will scan the documents and will upload the documents to the AWS Cloud.
A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an
application can run SQL queries on the data. The solution must maximize scalability and operational efficiency.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Write the document information to an Amazon EC2 instance that runs a MySQL database.
B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.
C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the
medical information.
D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw
text. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text.
E. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw text.
Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.
Correct Answer:
CD
Highly Voted
7 months ago
B and E are correct. Textract extracts text from files. Rekognition can also be used for text detection, but option D then uses Transcribe
Medical, and Transcribe is for speech-to-text, so option D is not valid.
upvoted 8 times
Most Recent
2 days, 11 hours ago
Selected Answer: BE
B is correct because it suggests writing the document information to an Amazon S3 bucket, which provides scalable and durable object storage.
Using Amazon Athena, the data can be queried using SQL, enabling efficient analysis.
E is correct because it involves creating an AWS Lambda function triggered by new document uploads. Amazon Textract is used to convert the
documents to raw text, and Amazon Comprehend Medical extracts relevant medical information from the text.
A is incorrect because writing the document information to an Amazon EC2 instance with a MySQL database is not a scalable or efficient solution
for analysis.
C is incorrect because creating an Auto Scaling group of Amazon EC2 instances for processing scanned files and extracting information would
introduce unnecessary complexity and management overhead.
D is incorrect because using an EC2 instance with a MySQL database for storing document information is not the optimal solution for scalability
and efficient analysis.
upvoted 1 times
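The B+E pipeline discussed above can be sketched as a small handler: Textract pulls raw text out of the scanned document, Comprehend Medical extracts entities, and the result lands in S3 where Athena can query it. The call shapes follow the boto3 Textract `detect_document_text` and Comprehend Medical `detect_entities_v2` APIs; clients are injected so the flow can be shown without AWS credentials, and the bucket/key names are placeholders:

```python
# Sketch of the S3-upload-triggered pipeline (a hedged illustration, not a
# production Lambda): Textract -> raw text -> Comprehend Medical entities,
# then the record is written back to S3 as JSON for Athena to query.
import json

def process_document(bucket, key, textract, comprehend, s3):
    # Textract reads the scanned image/PDF directly from S3.
    resp = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    # Keep only LINE blocks, which carry the detected lines of text.
    text = " ".join(
        b["Text"] for b in resp["Blocks"] if b["BlockType"] == "LINE"
    )
    # Comprehend Medical pulls out medical entities from the raw text.
    entities = comprehend.detect_entities_v2(Text=text)["Entities"]
    record = {"source": key, "text": text, "entities": entities}
    # Store under a prefix that an Athena table could point at.
    s3.put_object(
        Bucket=bucket, Key=f"extracted/{key}.json", Body=json.dumps(record)
    )
    return record
```

In a real deployment this function body would sit inside a Lambda handler triggered by the S3 `ObjectCreated` event, with boto3 clients for `textract`, `comprehendmedical`, and `s3`.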
3 weeks, 1 day ago
It states in the question that the written documents are scanned. They are converted into images after being scanned. Rekognition would be best
to analyse images.
upvoted 1 times
1 month ago
Selected Answer: BE
Options B & E are correct answers.
upvoted 1 times
1 month, 1 week ago
Selected Answer: BE
Why CD are marked as correct??
upvoted 1 times
Community vote distribution: BE (100%)
1 month, 2 weeks ago
Selected Answer: BE
operational efficiency = Serverless
S3 is serverless
upvoted 1 times
3 months ago
Selected Answer: BE
Answer : BE
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: BE
B and E are correct
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: BE
Lambda, Textract and S3 Athena perfect combination
upvoted 2 times
5 months, 1 week ago
Selected Answer: BE
Correct answers are B & E
upvoted 1 times
6 months ago
Selected Answer: BE
BE - SQL queries on S3 via Athena, Textract to extract the text, and Comprehend to analyze it.
upvoted 3 times
6 months, 2 weeks ago
Selected Answer: BE
Documents can be several pages of text, so storing large text in MySQL is not very efficient, and deploying it on EC2 requires operational
overhead, so A is out.
Only Textract is used for converting documents to text and Comprehend Medical to parse medical phrases. So E is correct.
Correct are BE
upvoted 4 times
6 months, 3 weeks ago
Can someone help me? Shouldn't it be AE? Since the document information is text, should it be stored in a relational DB instead of S3?
upvoted 1 times
7 months ago
Selected Answer: BE
answer BE
upvoted 4 times
7 months ago
BE of course
upvoted 2 times
7 months ago
Answer: BE
upvoted 2 times
7 months ago
B and E for Sure
upvoted 2 times
Topic 1
Question #193
A company is running a batch application on Amazon EC2 instances. The application consists of a backend with multiple Amazon RDS databases.
The application is causing a high number of reads on the databases. A solutions architect must reduce the number of database reads while
ensuring high availability.
What should the solutions architect do to meet this requirement?
A. Add Amazon RDS read replicas.
B. Use Amazon ElastiCache for Redis.
C. Use Amazon Route 53 DNS caching
D. Use Amazon ElastiCache for Memcached.
Correct Answer:
A
Highly Voted
7 months ago
Selected Answer: B
Use ElastiCache to reduce reads, and choose Redis to ensure high availability.
upvoted 21 times
4 months, 1 week ago
Where is the high availability when the database fails and the cached entries expire?
The answer is A.
upvoted 15 times
22 hours, 25 minutes ago
They run multiple databases
upvoted 1 times
1 month ago
Elasticache for Redis ensures high availability by using read replicas and Multi AZ with failover. It is also faster since it uses cache.
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
upvoted 1 times
1 month ago
A can't be the answer because the requirement is "reduce the number of database reads".
upvoted 2 times
Highly Voted
2 months, 2 weeks ago
Selected Answer: A
A vs B:
A: reduce the number of database reads on main + high availability provide
B: only reduce the number of DB reads
so A wins
upvoted 10 times
Most Recent
14 hours, 36 minutes ago
Going with B, as the question specifically requires reducing database reads. That can only be achieved using ElastiCache for Redis.
upvoted 1 times
22 hours, 26 minutes ago
Selected Answer: B
They want to reduce the number of database reads, so ElastiCache.
A read replica still reads from the DB.
upvoted 1 times
2 days ago
Selected Answer: A
Answer: A - To reduce the number of database reads while ensuring high availability on Amazon RDS, the solutions architect should add Amazon
RDS read replicas. This allows reads to be distributed among the replicas, reducing the number of reads on each database and increasing
availability.
upvoted 1 times
Community vote distribution: A (52%), B (48%)
2 days, 11 hours ago
Selected Answer: A
By adding read replicas to the Amazon RDS databases, the read workload can be offloaded to the replicas, reducing the number of database reads
and improving performance. Read replicas provide high availability and can handle read traffic independently, distributing the load and reducing
the burden on the primary database.
B. Amazon ElastiCache for Redis is an in-memory data store primarily used for caching, which can improve read performance, but it doesn't directly
reduce the number of database reads.
C. Amazon Route 53 DNS caching is a service that caches DNS responses, which can improve overall network performance, but it doesn't
specifically address reducing database reads.
D. Amazon ElastiCache for Memcached is another caching service similar to Redis, but it doesn't directly address the issue of reducing database
reads.
upvoted 1 times
2 weeks, 3 days ago
Selected Answer: A
I go for A.
https://www.certification-questions.com/amazon-exam/aws-certified-solutions-architect-associate-saa-c02-dumps.html
upvoted 1 times
3 weeks, 1 day ago
''A'' does not reduce the number of reads; it just spreads the reads across more replicas. We need to specifically decrease the number of
reads, which can only be done by caching.
upvoted 4 times
4 weeks ago
ElastiCache reduces reading
upvoted 1 times
1 month ago
ElasticCacheReduceReads
upvoted 2 times
1 month ago
Selected Answer: A
A - reduce the amount of reads
upvoted 2 times
1 month ago
Selected Answer: A
Option A is the correct answer.
upvoted 2 times
1 month ago
Selected Answer: B
Answer is B. A is not valid (the requirement is to reduce the number of DATABASE reads)
upvoted 2 times
1 month, 1 week ago
Selected Answer: A
B and D would have the same effect in this case, and since it's a single-choice question, the correct answer should be A.
upvoted 2 times
1 month, 2 weeks ago
another controversial question from AWS, great.
upvoted 5 times
1 month, 2 weeks ago
Every tier you bring in lowers availability; no tier has 100% availability. A.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
- reduce the number of database reads
Increasing amount of read replicas (A) won't influence the amount of database reads. They will be just distributed to more replicas. While the
requirement sounds nonsense to me (who cares the amount of reads?) the only way to achieve this is to put a cache before the database.
For Redis vs Memcached:
"Redis lets you create multiple replicas of a Redis primary. This allows you to scale database reads and to have (!) highly available (!) clusters."
https://aws.amazon.com/elasticache/redis-vs-memcached/
upvoted 3 times
Topic 1
Question #194
A company needs to run a critical application on AWS. The company needs to use Amazon EC2 for the application’s database. The database must
be highly available and must fail over automatically if a disruptive event occurs.
Which solution will meet these requirements?
A. Launch two EC2 instances, each in a different Availability Zone in the same AWS Region. Install the database on both EC2 instances.
Configure the EC2 instances as a cluster. Set up database replication.
B. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance. Use an Amazon Machine Image (AMI) to back up
the data. Use AWS CloudFormation to automate provisioning of the EC2 instance if a disruptive event occurs.
C. Launch two EC2 instances, each in a different AWS Region. Install the database on both EC2 instances. Set up database replication. Fail
over the database to a second Region.
D. Launch an EC2 instance in an Availability Zone. Install the database on the EC2 instance. Use an Amazon Machine Image (AMI) to back up
the data. Use EC2 automatic recovery to recover the instance if a disruptive event occurs.
Correct Answer:
C
Highly Voted
6 months, 3 weeks ago
Selected Answer: A
Changing my vote to A. After reviewing a Udemy course of SAA-C03, it seems that A (multi-AZ and Clusters) is sufficient for HA.
upvoted 20 times
6 months ago
Which lecture number was that in?
upvoted 4 times
Highly Voted
7 months ago
Selected Answer: C
The question states that it is a critical app and it has to be HA. A could be the answer, but it's all in the same Region, so if the entire Region
fails, it doesn't cater for the HA requirement.
However, the likelihood of a failure in two different regions at the same time is 0. Therefore, to me it seems that C is the better option to cater for
HA requirement.
In addition, C does state like A that the DB app is installed on an EC2 instance.
upvoted 16 times
1 month, 2 weeks ago
Design for region failure? may as well design for AWS failure and put replica in GCP and Azure :v
upvoted 2 times
4 months ago
The question doesn't ask which option is the most HA. It asks what meets the requirements.
upvoted 2 times
6 months, 3 weeks ago
But for C you need communication between the two VPCs, which increases complexity. A should be enough for HA.
upvoted 4 times
Most Recent
2 days, 11 hours ago
Selected Answer: A
By launching two EC2 instances in different Availability Zones and configuring them as a cluster with database replication, the database can achieve
high availability and automatic failover. If one instance or Availability Zone becomes unavailable, the other instance can continue serving the
application without interruption.
B. Launching a single EC2 instance and using an AMI for backup and provisioning automation does not provide automatic failover or high
availability.
C. Launching EC2 instances in different AWS Regions and setting up database replication is a multi-Region setup, which can provide disaster
recovery capabilities but does not provide automatic failover within a single Region.
D. Using EC2 automatic recovery can recover the instance if it fails due to hardware issues, but it does not provide automatic failover or high
availability across multiple instances or Availability Zones.
upvoted 1 times
Community vote distribution: A (58%), C (42%)
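The failover behavior option A relies on can be illustrated with a toy health-check sketch. The instance names and health flags below are simulated stand-ins, not real AWS state:

```python
# Toy sketch of two-AZ failover: traffic goes to the primary while it is
# healthy and fails over to the replica in the other AZ when it is not.
# Instance names and health flags are made up for illustration.
instances = {
    "db-az-a": {"healthy": True},   # primary, Availability Zone a
    "db-az-b": {"healthy": True},   # replica, Availability Zone b
}

def active_endpoint():
    """Return the instance that should serve traffic right now."""
    if instances["db-az-a"]["healthy"]:
        return "db-az-a"
    if instances["db-az-b"]["healthy"]:
        return "db-az-b"
    raise RuntimeError("no healthy database instance")
```

In a real cluster this decision is made by the clustering software (plus replication to keep the standby current), which is what makes the failover automatic rather than manual.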
1 month, 1 week ago
Selected Answer: C
Cluster EC2s cannot span between AZs, which invalidates option A.
upvoted 7 times
1 month, 1 week ago
that's what i thought !!!
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: A
A is correct. Meets the requirements
upvoted 2 times
1 week, 4 days ago
You can't cluster EC2 instances when they are in separate AZs. This invalidates answer A. You have to read each word carefully.
upvoted 2 times
2 months ago
The answer is C, since multi-Region infrastructure provides more HA than multi-AZ.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: C
Better choice for HA: a different Region is better than a different AZ.
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: A
To be "highly available" it's sufficient to configure a multi-AZ (Availability Zone) instance.
NOT multi-region.
upvoted 4 times
2 months, 3 weeks ago
How could you set up a cluster of EC2 instances in different Regions, when clustering requires instances to be placed in the same AZs?
upvoted 2 times
3 months ago
Selected Answer: A
It has to be A, as this is asking for HA, not DR. If it had been DR, we could think of an entire-Region failure, which would lead us to have
another instance in another Region.
upvoted 4 times
3 months ago
Selected Answer: A
Answer : A
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
For the once wondering between A and C.
"...Configure the EC2 instances as a cluster" > this gives you the automatic failover to the second DB. C points to a manual failover, making that
answer incorrect.
upvoted 4 times
3 months, 2 weeks ago
Selected Answer: A
looks like A
upvoted 1 times
4 months ago
Selected Answer: A
Where should the database be stored? It should be stored on an EBS volume, which doesn't support multi-Region failover.
upvoted 1 times
4 months ago
Selected Answer: A
High availability = Availability Zone
Disaster Recovery = Multi-Region
“DISRUPTIVE” DOES NOT suggest DISASTER!
upvoted 5 times
4 months, 1 week ago
Selected Answer: A
Voted for A after some consultation with a more experienced AWS architect... The clue here is that the failover must be done automatically.
upvoted 1 times
4 months, 1 week ago
Selected Answer: A
ECS Spread placement strategy
ECS groups available capacity used to place Tasks into ECS Clusters, with ECS Tasks being launched into an ECS Cluster. An ECS Cluster configured
to use EC2 will have EC2 instances registered with it, and each EC2 instance resides in a single Availability Zone. You should ensure that you
have EC2 instances registered with your Cluster from multiple Availability Zones.
https://aws.amazon.com/blogs/containers/amazon-ecs-availability-best-practices/#:~:text=An%20ECS%20Clusters%20configured%20to,Cluster%20from%20multiple%20Availability%20Zones.
upvoted 2 times
Topic 1
Question #195
A company’s order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders
in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that
can process orders automatically if a system outage occurs.
What should a solutions architect do to meet these requirements?
A. Move the EC2 instances into an Auto Scaling group. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon
Elastic Container Service (Amazon ECS) task.
B. Move the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB). Update the order system to send messages
to the ALB endpoint.
C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure the EC2 instances to consume messages from the queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function, and subscribe the function to the SNS
topic. Configure the order system to send messages to the SNS topic. Send a command to the EC2 instances to process the messages by
using AWS Systems Manager Run Command.
Correct Answer:
D
2 days, 11 hours ago
Selected Answer: C
By moving the EC2 into an ASG and configuring them to consume messages from an SQS, the system can decouple the order processing from the
order system itself. This allows the system to handle failures and automatically process orders even if the order system or EC2 experience outages.
A. Using an ASG with an EventBridge rule targeting an ECS task does not provide the necessary decoupling and message queueing for automatic
order processing during outages.
B. Moving the EC2 instances into an ASG behind an
ALB does not address the need for message queuing and automatic processing during outages.
D. Using SNS and Lambda can provide notifications and orchestration capabilities, but it does not provide the necessary message queueing and
consumption for automatic order processing during outages. Additionally, using Systems Manager Run Command to send commands for order
processing adds complexity and does not provide the desired level of automation.
upvoted 1 times
4 days, 18 hours ago
D is so unnecessary .... this confuses people
upvoted 1 times
2 days, 11 hours ago
Thanks Almighty for the voting system! Answers provided by the site (and not by the community) are 20% wrong.
upvoted 1 times
1 week, 4 days ago
Answer D is so complex and unnecessary. Why is the moderator not providing an explanation of the answers when there are heavy conflicts? These
kinds of answers put your knowledge in question, which is not good going into the exam.
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
To meet the company's requirements of having a resilient solution that can process orders automatically in case of a system outage, the solutions
architect needs to implement a fault-tolerant architecture. Based on the given scenario, a potential solution is to move the EC2 instances into an
Auto Scaling group and configure the order system to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. The EC2
instances can then consume messages from the queue.
upvoted 2 times
3 months ago
Selected Answer: C
Answer : C
upvoted 1 times
Community vote distribution: C (90%), 5%
4 months, 1 week ago
Selected Answer: C
C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service (Amazon
SQS) queue. Configure the EC2 instances to consume messages from the queue.
To meet the requirements of the company, a solutions architect should ensure that the system is resilient and can process orders automatically in
the event of a system outage. To achieve this, moving the EC2 instances into an Auto Scaling group is a good first step. This will enable the system
to automatically add or remove instances based on demand and availability.
upvoted 2 times
4 months, 1 week ago
However, it's also necessary to ensure that orders are not lost if a system outage occurs. To achieve this, the order system can be configured to
send messages to an Amazon Simple Queue Service (Amazon SQS) queue. SQS is a highly available and durable messaging service that can
help ensure that messages are not lost if the system fails.
Finally, the EC2 instances can be configured to consume messages from the queue, process the orders and then store them in the database on
Amazon RDS. This approach ensures that orders are not lost and can be processed automatically if a system outage occurs. Therefore, option C
is the correct answer.
upvoted 2 times
4 months, 1 week ago
Option A is incorrect because it suggests creating an Amazon EventBridge rule to target an Amazon Elastic Container Service (ECS) task.
While this may be a valid solution in some cases, it is not necessary in this scenario.
Option B is incorrect because it suggests moving the EC2 instances into an Auto Scaling group behind an Application Load Balancer (ALB)
and updating the order system to send messages to the ALB endpoint. While this approach can provide resilience and scalability, it does not
address the issue of order processing and the need to ensure that orders are not lost if a system outage occurs.
Option D is incorrect because it suggests using Amazon Simple Notification Service (SNS) to send messages to an AWS Lambda function,
which will then send a command to the EC2 instances to process the messages by using AWS Systems Manager Run Command. While this
approach may work, it is more complex than necessary and does not take advantage of the durability and availability of SQS.
upvoted 2 times
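The durability argument for C made above — a message is deleted only after successful processing, so a consumer crash means redelivery rather than a lost order — can be sketched with a plain in-memory queue standing in for SQS:

```python
# Sketch of SQS-style decoupling: orders are buffered in a queue, and a
# message is removed only after it has been processed successfully. A
# deque stands in for the SQS queue; failure is simulated with a flag.
from collections import deque

queue = deque()
processed = []

def send_order(order):
    """Producer side: the order system enqueues instead of calling EC2."""
    queue.append(order)

def consume_once(fail=False):
    """Consumer side: one receive/process/delete cycle on an EC2 worker."""
    if not queue:
        return
    order = queue[0]          # receive: message stays in the queue
    if fail:
        return                # outage: not deleted, will be redelivered
    processed.append(order)
    queue.popleft()           # delete only after successful processing
```

In real SQS the "message stays until deleted" behavior is governed by the visibility timeout: an undeleted message reappears for another consumer, which is exactly why orders survive an outage without manual reprocessing.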
5 months, 1 week ago
Selected Answer: C
My question is: can orders be sent directly into an SQS queue? What about the protocol for managing the messages from the queue? Can
EC2 instances be programmed to process them like Lambda?
upvoted 1 times
6 months ago
Selected Answer: D
I choose D
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
To meet the requirements of the company, a solution should be implemented that can automatically process orders if a system outage occurs.
Option C meets these requirements by using an Auto Scaling group and Amazon Simple Queue Service (SQS) to ensure that orders can be
processed even if a system outage occurs.
In this solution, the EC2 instances are placed in an Auto Scaling group, which ensures that the number of instances can be automatically scaled up
or down based on demand. The ordering system is configured to send messages to an SQS queue, which acts as a buffer and stores the messages
until they can be processed by the EC2 instances. The EC2 instances are configured to consume messages from the queue and process them. If a
system outage occurs, the messages in the queue will remain available and can be processed once the system is restored.
upvoted 2 times
6 months, 1 week ago
Selected Answer: A
C is right
upvoted 1 times
6 months, 1 week ago
C. Move the EC2 instances into an Auto Scaling group. Configure the order system to send messages to an Amazon Simple Queue Service (Amazon
SQS) queue. Configure the EC2 instances to consume messages from the queue.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
C - decouples applications and functionality, and gives the ability to reprocess a message if it failed due to a networking issue or overloaded downstream systems
upvoted 2 times
6 months, 2 weeks ago
C
Configuring the EC2 instances to consume messages from the SQS queue will ensure that the instances can process orders automatically, even if a
system outage occurs.
upvoted 1 times
7 months ago
SQS order
upvoted 1 times
7 months ago
Selected Answer: C
C. SQS meets this requirement.
upvoted 2 times
7 months ago
Selected Answer: C
C is the right answer
upvoted 1 times
7 months ago
C is the answer
upvoted 1 times
Topic 1
Question #196
A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB
table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a
solution that minimizes cost and development effort.
Which solution meets these requirements?
A. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the
original stack.
B. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon
DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items
that have a timestamp that is older than 30 days.
C. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda
function to delete items in the table that are older than 30 days.
D. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the
table. Configure DynamoDB to use the attribute as the TTL attribute.
Correct Answer:
D
Highly Voted
6 months, 4 weeks ago
Selected Answer: D
changing my answer to D after researching a bit.
The DynamoDB TTL feature allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and
time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
upvoted 25 times
Most Recent
2 days, 11 hours ago
Selected Answer: D
By adding a TTL attribute to the DynamoDB table and setting it to the current timestamp plus 30 days, DynamoDB will automatically delete the
items that are older than 30 days. This solution eliminates the need for manual deletion or additional infrastructure components.
A. Redeploying the CloudFormation stack every 30 days and deleting the original stack introduces unnecessary complexity and operational
overhead.
B. Using an EC2 instance with a monitoring application and a script to delete items older than 30 days adds additional infrastructure and
maintenance efforts.
C. Configuring DynamoDB Streams to invoke a Lambda function to delete items older than 30 days adds complexity and requires additional
development and operational effort compared to using the built-in TTL feature of DynamoDB.
upvoted 1 times
4 days, 17 hours ago
D: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times
1 month ago
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed.
upvoted 2 times
1 month, 2 weeks ago
Selected Answer: D
C is incorrect because it can take more than 15 minutes to delete the old data. Lambda won't work
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
Clear case for TTL - every object gets deleted after a certain period of time
upvoted 1 times
Community vote distribution: D (91%), 9%
1 month, 3 weeks ago
Selected Answer: D
Use DynamoDB TTL feature to achieve this..
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: D
C is absurd. DynamoDB is typically used for tables with high IOPS (read/write operations), and executing a Lambda function each time you insert
an item will not be cost-effective. It's much better to create the attribute the question proposes and manage the deletion with a delete statement:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.DeleteData.html
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the
date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at
no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
TTL is useful if you store items that lose relevance after a specific time.
upvoted 1 times
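The TTL mechanics described above amount to stamping each item with an epoch-seconds expiry; DynamoDB then deletes expired items in the background at no write cost. A minimal sketch of how the application would build such an item (attribute names are illustrative; the TTL attribute is whichever one the table's TTL configuration points at):

```python
# Sketch of option D: each new item carries an "expires_at" attribute set to
# now + 30 days in epoch seconds, the format DynamoDB TTL expects.
# Attribute names here are illustrative, not mandated by DynamoDB.
import time

THIRTY_DAYS = 30 * 24 * 60 * 60  # 2,592,000 seconds

def new_item(order_id, now=None):
    """Build a DynamoDB item dict with a 30-day TTL timestamp."""
    now = int(now if now is not None else time.time())
    return {
        "order_id": order_id,
        "created_at": now,
        "expires_at": now + THIRTY_DAYS,  # TTL attribute, epoch seconds
    }
```

With TTL enabled on `expires_at`, no Lambda, Streams, or scheduled cleanup job is needed, which is what makes D the minimum-cost, minimum-effort option.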
5 months, 2 weeks ago
Selected Answer: D
D: This solution is more efficient and cost-effective than alternatives that would require additional resources and maintenance.
upvoted 1 times
6 months ago
Selected Answer: D
D - DynamoDB TTL will expire the items
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
To minimize cost and development effort, a solution that requires minimal changes to the existing application and infrastructure would be the most
appropriate. Option D meets these requirements by using DynamoDB's Time-To-Live (TTL) feature, which allows you to specify an attribute on each
item in a table that has a timestamp indicating when the item should expire.
In this solution, the application is extended to add an attribute that has a value of the current timestamp plus 30 days to each new item that is
created in the table. DynamoDB is then configured to use this attribute as the TTL attribute, which causes items to be automatically deleted from
the table when their TTL value is reached. This solution requires minimal changes to the existing application and infrastructure and does not
require any additional resources or a complex setup.
upvoted 1 times
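As a rough sketch of the TTL pattern described above (the table name, key, and TTL attribute name are hypothetical, and the AWS calls assume valid credentials):

```python
import time

# Hypothetical names for illustration only.
TABLE_NAME = "SessionData"
TTL_ATTRIBUTE = "expires_at"

def ttl_epoch(days=30, now=None):
    """Return an epoch-seconds timestamp `days` in the future,
    which is the format DynamoDB TTL expects."""
    base = time.time() if now is None else now
    return int(base + days * 86400)

def enable_ttl_and_write(item_id):
    # boto3 is imported lazily so the pure helper above stays usable
    # without AWS access.
    import boto3
    dynamodb = boto3.client("dynamodb")
    # One-time setup: tell DynamoDB which attribute holds the expiry time.
    dynamodb.update_time_to_live(
        TableName=TABLE_NAME,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": TTL_ATTRIBUTE},
    )
    # Each new item carries "now + 30 days"; DynamoDB deletes it after
    # expiry without consuming write throughput.
    dynamodb.put_item(
        TableName=TABLE_NAME,
        Item={
            "id": {"S": item_id},
            TTL_ATTRIBUTE: {"N": str(ttl_epoch(30))},
        },
    )
```

This is why option D needs only a small application change: one extra numeric attribute per item, plus a one-time TTL configuration on the table.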
6 months, 1 week ago
Option A involves using AWS CloudFormation to redeploy the solution every 30 days, but this would require significant development effort and
could cause downtime for the application.
Option B involves using an EC2 instance and a monitoring application to delete items that are older than 30 days, but this requires additional
infrastructure and maintenance effort.
Option C involves using DynamoDB Streams and a Lambda function to delete items that are older than 30 days, but this requires additional
infrastructure and maintenance effort.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
TTL does the trick
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the
date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. - check this link
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
https://aws.amazon.com/about-aws/whats-new/2017/02/amazon-dynamodb-now-supports-automatic-item-expiration-with-time-to-live-ttl/
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D - Right answer
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
DynamoDB has the TTL (Time to Live) functionality that gives you the option to set how long you want a particular item to persist in the table.
https://aws.amazon.com/premiumsupport/knowledge-center/ttl-dynamodb/
upvoted 1 times
Topic 1
Question #197
A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle
Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the
application. The AWS application environment should be highly available.
Which combination of actions should the company take to meet these requirements? (Choose two.)
A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
Correct Answer:
BD
Highly Voted
5 months, 2 weeks ago
Selected Answer: BE
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
Rehosting the application in Elastic Beanstalk with the .NET platform can minimize development changes. A Multi-AZ deployment of Elastic Beanstalk
will increase the availability of the application, so it meets the requirement of high availability.
Using AWS Database Migration Service (DMS) to migrate the database to Oracle on Amazon RDS will ensure compatibility, so the application can
continue to use the same database technology, and the development team can use their existing skills. It also migrates to a managed service,
which will handle the availability, so the team does not have to worry about it. A Multi-AZ deployment will increase the availability of the database.
upvoted 9 times
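The DMS piece of option E can be sketched roughly as below. All ARNs and the schema name are hypothetical placeholders; this assumes the source and target endpoints and a replication instance already exist:

```python
import json

def table_mappings(schema="HR"):
    """Build a minimal DMS table-mapping document that selects
    every table in the given (hypothetical) schema."""
    return json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-schema",
            "object-locator": {"schema-name": schema, "table-name": "%"},
            "rule-action": "include",
        }]
    })

def create_oracle_to_rds_task(source_arn, target_arn, instance_arn):
    # Lazy import keeps the pure helper above testable without AWS access.
    import boto3
    dms = boto3.client("dms")
    # Full load plus change data capture keeps the RDS target in sync
    # with on-premises Oracle until cutover.
    return dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-rds-oracle",
        SourceEndpointArn=source_arn,
        TargetEndpointArn=target_arn,
        ReplicationInstanceArn=instance_arn,
        MigrationType="full-load-and-cdc",
        TableMappings=table_mappings(),
    )
```

Because both sides are Oracle, no schema conversion is needed, which is what keeps the development effort low.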
Most Recent
2 days, 11 hours ago
Selected Answer: BE
B. This allows the company to migrate the application to AWS without significant code changes while leveraging the scalability and high availability
provided by Elastic Beanstalk's Multi-AZ deployment.
E. This enables the company to migrate the Oracle database to RDS while maintaining compatibility with the existing application and leveraging
the Multi-AZ deployment for high availability.
A. would require significant development changes and may not provide the same level of compatibility as rehosting or replatforming options.
C. would still require changes to the application and the underlying infrastructure, whereas rehosting with Elastic Beanstalk minimizes the need for
modification.
D. would likely require significant changes to the application code, as DynamoDB is a NoSQL database with a different data model compared to
Oracle.
upvoted 1 times
1 week, 4 days ago
Answer is BE. No idea why D was chosen. That requires development work and question clearly states minimize development changes, changing db
from Oracle to DynamoDB is LOT of development.
upvoted 1 times
1 month ago
Selected Answer: BE
B + E are the answers that fulfill the requirements.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BE
B and E
upvoted 1 times
1 month, 3 weeks ago
why not C?
upvoted 2 times
3 weeks, 1 day ago
It runs on a Windows server; shifting the whole thing to a Linux-based EC2 instance would be extra work and would make no sense.
upvoted 1 times
3 months ago
Selected Answer: BE
Answer : BE
upvoted 1 times
5 months, 3 weeks ago
Why A is wrong?
upvoted 1 times
5 months, 3 weeks ago
Because that requires some development effort.
upvoted 2 times
6 months, 1 week ago
Selected Answer: BE
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
To minimize development changes while moving the application to AWS and to ensure a high level of availability, the company can rehost the
application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment. This will allow the application to run in a highly available
environment without requiring any changes to the application code.
The company can also use AWS Database Migration Service (AWS DMS) to migrate the Oracle database to Oracle on Amazon RDS in a Multi-AZ
deployment. This will allow the company to maintain the existing database platform while still achieving a high level of availability.
upvoted 3 times
6 months, 1 week ago
Selected Answer: BE
B & E, because D (DynamoDB) is NoSQL
upvoted 1 times
5 months, 2 weeks ago
And requires additional development effort
upvoted 1 times
6 months, 1 week ago
B&E Option
upvoted 1 times
6 months, 4 weeks ago
B - According to the AWS documentation, the simplest way to migrate .NET applications to AWS is to rehost the applications using either AWS
Elastic Beanstalk or Amazon EC2.
E - RDS with Oracle is a no-brainer
upvoted 3 times
7 months ago
Selected Answer: BE
same as everyone else
upvoted 3 times
7 months ago
B and E should be correct. The question says "minimize development changes", so we should go with the same Oracle DB.
upvoted 1 times
7 months ago
Selected Answer: BE
B for minimal development (Elastic Beanstalk)
E for RDS with Oracle
upvoted 1 times
7 months ago
Selected Answer: BE
https://www.examtopics.com/discussions/amazon/view/67840-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
7 months ago
Selected Answer: BE
B E is correct
upvoted 1 times
Topic 1
Question #198
A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database
for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are
possible at this time. The company needs a solution that minimizes operational overhead.
Which solution meets these requirements?
A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data
storage.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB
compatibility) for data storage.
Correct Answer:
D
Highly Voted
6 months, 2 weeks ago
Selected Answer: D
If you see MongoDB, just go ahead and look for the answer that says DocumentDB.
upvoted 13 times
Most Recent
2 days, 10 hours ago
Selected Answer: D
This solution allows the company to leverage EKS to manage the K8s cluster and Fargate to handle the compute resources without requiring
manual management of EC2 worker nodes. The use of DocumentDB provides a fully managed MongoDB-compatible database service in AWS.
A. would require managing and scaling the EC2 instances manually, which increases operational overhead.
B. would require significant changes to the application code as DynamoDB is a NoSQL database with a different data model compared to
MongoDB.
C. would also require code changes to adapt to DynamoDB's different data model, and managing EC2 worker nodes increases operational
overhead.
upvoted 1 times
1 month ago
Selected Answer: D
The solution meets these requirements is option D.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
minimizes operational overhead = Serverless (Fargate)
MongoDB = DocumentDB
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
To minimize operational overhead and avoid making any code or deployment method changes, the company can use Amazon Elastic Kubernetes
Service (EKS) with AWS Fargate for computing and Amazon DocumentDB (with MongoDB compatibility) for data storage. This solution allows the
company to run the containerized application on EKS without having to manage the underlying infrastructure or make any changes to the
application code.
AWS Fargate is a fully-managed container execution environment that allows you to run containerized applications without the need to manage
the underlying EC2 instances.
Amazon DocumentDB is a fully-managed document database service that supports MongoDB workloads, allowing the company to use the same
database platform as in their on-premises environment without having to make any code changes.
upvoted 4 times
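The MongoDB compatibility point above is the crux of option D: the application's existing MongoDB driver connects to DocumentDB unchanged, only the connection string moves. A rough sketch (hostname, credentials, and the CA bundle filename are placeholders; the URI options reflect values DocumentDB clusters commonly use):

```python
def documentdb_uri(host, user, password, port=27017):
    """Build a MongoDB-style connection string for an Amazon DocumentDB
    cluster. TLS and replica-set options are typical DocumentDB settings."""
    return (
        f"mongodb://{user}:{password}@{host}:{port}/"
        "?tls=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
    )

def connect(host, user, password):
    # pymongo is imported lazily; the same driver the app already uses
    # against MongoDB works unchanged against DocumentDB, which is the
    # point of option D. "global-bundle.pem" is the AWS CA bundle file.
    from pymongo import MongoClient
    return MongoClient(
        documentdb_uri(host, user, password),
        tlsCAFile="global-bundle.pem",
    )
```

No application query code changes; only the URI (and the TLS CA file) differs from the on-premises deployment.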
6 months, 1 week ago
Selected Answer: D
A & B are eliminated because it's Kubernetes.
For why D, read here: https://containersonaws.com/introduction/ec2-or-aws-fargate/
upvoted 2 times
6 months, 1 week ago
Selected Answer: D
Option D
upvoted 2 times
6 months, 4 weeks ago
DDDDDDD
upvoted 1 times
7 months ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/67897-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
7 months ago
Selected Answer: D
D meets the requirements
upvoted 1 times
7 months ago
Selected Answer: D
D
EKS because of Kubernetes, so A and B are eliminated.
Not C because DynamoDB is not MongoDB-compatible, even though Fargate costs more than EC2 worker nodes.
upvoted 1 times
Topic 1
Question #199
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker
recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files
must be stored for 7 years for auditing purposes.
Which solution will meet these requirements?
A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for
transcript file analysis.
B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file
analysis.
D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file
analysis.
Correct Answer:
C
Highly Voted
6 months, 1 week ago
Selected Answer: B
The correct answer is B: Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
Amazon Transcribe is a service that automatically transcribes spoken language into written text. It can handle multiple speakers and can generate
transcript files in real-time or asynchronously. These transcript files can be stored in Amazon S3 for long-term storage.
Amazon Athena is a query service that allows you to analyze data stored in Amazon S3 using SQL. You can use it to analyze the transcript files and
identify patterns in the data.
Option A is incorrect because Amazon Rekognition is a service for analyzing images and videos, not transcribing spoken language.
Option C is incorrect because Amazon Translate is a service for translating text from one language to another, not transcribing spoken language.
Option D is incorrect because Amazon Textract is a service for extracting text and data from documents and images, not transcribing spoken
language.
upvoted 11 times
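The Athena half of option B can be sketched like this. The table name `call_transcripts`, its columns, the database name, and the results bucket are all hypothetical (in practice a Glue crawler or a `CREATE EXTERNAL TABLE` statement would define the table over the transcript files in S3):

```python
def build_transcript_query(min_speakers=2):
    """SQL that Athena could run over transcript files catalogued in a
    hypothetical `call_transcripts` table defined over the S3 bucket."""
    return (
        "SELECT call_id, speaker_count, call_date "
        "FROM call_transcripts "
        f"WHERE speaker_count >= {min_speakers} "
        "ORDER BY call_date DESC"
    )

def run_query(query, database="call_center",
              output="s3://example-athena-results/"):
    # Lazy import: the query builder above needs no AWS access.
    import boto3
    athena = boto3.client("athena")
    # Athena queries the files in place in S3; no data loading step.
    return athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
```

Because Athena queries S3 in place, the same bucket satisfies both the analysis requirement and the 7-year retention requirement (e.g. with an S3 Lifecycle rule moving older transcripts to Glacier).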
2 months, 3 weeks ago
What bothers me is the 7 years of storage.
upvoted 3 times
5 months ago
The correct answer is C.
https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html
You can transcribe streaming media in real time or you can upload and transcribe media files. To see which languages are supported for each
type of transcription, refer to the Supported languages and language-specific features table.
upvoted 1 times
5 months ago
Disregard. I meant B
upvoted 1 times
5 months ago
https://aws.amazon.com/about-aws/whats-new/2022/06/amazon-transcribe-supports-automatic-language-identification-multi-lingual-
audio/
Amazon Translate is a service for multi-language identification, which identifies all languages spoken in the audio file and creates transcript
using each identified language.
upvoted 1 times
5 months ago
Disregard. I meant Amazon Transcribe
upvoted 1 times
Most Recent
2 days, 10 hours ago
Amazon Transcribe provides accurate transcription of audio recordings with multiple speakers, generating transcript files. These files can be stored
in Amazon S3. To analyze the transcripts and extract insights, Amazon Athena allows SQL-based querying of the stored files.
A. Amazon Rekognition is for image and video analysis, not audio transcription.
C. Amazon Translate is for language translation, not speaker recognition or transcript analysis. Amazon Redshift may not be the best choice for
storing and querying transcript files.
D. Amazon Rekognition is for image and video analysis, and Amazon Textract is for document extraction, not suitable for audio transcription or
analysis. Storing the transcript files in S3 is appropriate, but the analysis requires a different service like Amazon Athena.
upvoted 1 times
1 month ago
Selected Answer: B
the solution that meets these requirements is option B.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: B
B is correct
upvoted 1 times
1 month, 3 weeks ago
Amazon Transcribe is a service that convert speech into text, so B is the answer
upvoted 1 times
3 months ago
Selected Answer: B
Answer : B
upvoted 2 times
5 months ago
Selected Answer: C
https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html
upvoted 1 times
5 months, 3 weeks ago
The correct answer is C.
Wouldn't it be the right answer to store and analyze using Amazon Redshift, which can be used to analyze big data with data warehousing?
upvoted 2 times
6 months ago
B
https://aws.amazon.com/transcribe/
Amazon Transcribe
Automatically convert speech to text
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Only B
https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c03/view/7/#
Rekognition - image and video analysis
Transcribe - speech to text
Translate - Translate a text-based file from a language to another language
upvoted 3 times
6 months, 1 week ago
Selected Answer: B
Rekognition - image and video analysis
Transcribe - speech to text
Translate - translate a text-based file from one language to another
So by logical deduction it is B
upvoted 2 times
6 months, 1 week ago
Selected Answer: B
B is the right answer. You can specify the S3 bucket with transcribe to store the data for 7 years and use Athena for Analytics later. Transcribe also
supports Multiple speaker recognition.
upvoted 3 times
6 months, 4 weeks ago
Selected Answer: B
Answer is B - pretty straightforward.
upvoted 1 times
6 months, 4 weeks ago
Selected Answer: B
Answer is B.
upvoted 1 times
7 months ago
Why is it not C?
"Amazon Translate is a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand.
You can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages."
upvoted 2 times
7 months ago
Disregard. I meant B
upvoted 1 times
7 months ago
Why it is B instead of C? The question didn't mention to use S3 to store the data, so it cannot be athena ?
upvoted 1 times
5 months, 2 weeks ago
"The transcript files must be stored for 7 years for auditing purposes" which implied S3 storage. C is text translation (text from language 1 to
language 2), you are asked for audio transcription (audio to text), which are completely different things.
upvoted 2 times
7 months ago
B Transcribe
upvoted 1 times
Topic 1
Question #200
A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the
application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an
AWS managed solution that will control access to the REST API to reduce development efforts.
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
B. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
C. Send the user’s email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email
address has proper access.
D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
Correct Answer:
A
Highly Voted
6 months, 1 week ago
Selected Answer: D
KEYWORD: LEAST operational overhead
To control access to the REST API and reduce development efforts, the company can use an Amazon Cognito user pool authorizer in API Gateway.
This will allow Amazon Cognito to validate each request and ensure that only authenticated users can access the API. This solution has the LEAST
operational overhead, as it does not require the company to develop and maintain any additional infrastructure or code.
Therefore, Option D is the correct answer.
Option D. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.
upvoted 6 times
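The setup described above can be sketched with the API Gateway control-plane API. The REST API ID, resource ID, and user pool ARN are hypothetical placeholders:

```python
def cognito_authorizer_params(rest_api_id, user_pool_arn,
                              name="cognito-pool-auth"):
    """Parameters for apigateway.create_authorizer that put a Cognito
    user pool in front of a REST API; the JWT arrives in the
    Authorization header of each request."""
    return {
        "restApiId": rest_api_id,
        "name": name,
        "type": "COGNITO_USER_POOLS",
        "providerARNs": [user_pool_arn],
        "identitySource": "method.request.header.Authorization",
    }

def attach_authorizer(rest_api_id, user_pool_arn, resource_id,
                      http_method="GET"):
    # Lazy import so the helper above stays testable offline.
    import boto3
    apigw = boto3.client("apigateway")
    authorizer = apigw.create_authorizer(
        **cognito_authorizer_params(rest_api_id, user_pool_arn)
    )
    # Point the method at the authorizer; API Gateway then validates
    # each request's Cognito token with no custom code at all.
    apigw.update_method(
        restApiId=rest_api_id,
        resourceId=resource_id,
        httpMethod=http_method,
        patchOperations=[
            {"op": "replace", "path": "/authorizationType",
             "value": "COGNITO_USER_POOLS"},
            {"op": "replace", "path": "/authorizerId",
             "value": authorizer["id"]},
        ],
    )
```

Compare this with option A, where the same validation logic would have to be written, deployed, and maintained as a Lambda function.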
Most Recent
2 days, 10 hours ago
By configuring an Amazon Cognito user pool authorizer in API Gateway, you can leverage the built-in functionality of Amazon Cognito to
authenticate and authorize users. This eliminates the need for custom development or managing access keys. Amazon Cognito handles user
authentication, securely manages user identities, and provides seamless integration with API Gateway for controlling access to the REST API.
A. Configuring an AWS Lambda function as an authorizer in API Gateway would require custom implementation and management of the
authorization logic.
B. Creating and assigning an API key for each user would require additional management and validation logic in an AWS Lambda function.
C. Sending the user's email address in the header and validating it with an AWS Lambda function would also require custom implementation and
management of the authorization logic.
Option D, using an Amazon Cognito user pool authorizer, provides a streamlined and managed solution for controlling access to the REST API with
minimal operational overhead.
upvoted 1 times
1 month ago
Selected Answer: D
solution will meet these requirements with the LEAST operational overhead is option D.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
LEAST operational overhead = Serverless = Cognito user pool
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: D
D is correct.
upvoted 1 times
3 months ago
Selected Answer: D
Answer : D
upvoted 1 times
3 months, 1 week ago
D is correct
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
There is a difference between "Grant Access" (Authentication done by Cognito user pool), and "Control Access" to APIs (Authorization using IAM
policy, custom Authorizer, Federated Identity Pool). The question very specifically asks about *Control access to REST APIs* which is a clear case of
Authorization and not Authentication. So custom Authorizer using Lambda in API Gateway is the solution.
Pls refer to this blog: https://aws.amazon.com/blogs/security/building-fine-grained-authorization-using-amazon-cognito-api-gateway-and-iam/
upvoted 1 times
5 months, 2 weeks ago
This answer looks to be entirely wrong
This article explains how to do what you claim cannot be done: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-
integrate-with-cognito.html
It starts "As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an
Amazon Cognito user pool to control who can access your API in Amazon API Gateway."
This suggests that Amazon Cognito user pools CAN be used for Authorization, which you say above cannot be done.
Further, it explains how to do this...
"To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure
an API method to use that authorizer"
So whilst A is a valid approach, it looks like D achieves the same with "the LEAST operational overhead".
upvoted 7 times
3 months, 3 weeks ago
Control access to a REST API using Amazon Cognito user pools as authorizer
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
upvoted 3 times
5 months, 2 weeks ago
Option D: there is nothing called Cognito user pool authorizer. We only have custom Authorizer function through Lambda.
upvoted 1 times
5 months, 2 weeks ago
Oh yes there is :)
upvoted 2 times
6 months ago
Selected Answer: D
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
upvoted 3 times
6 months, 1 week ago
Selected Answer: D
Option D - As company already has all the users authentication information in Cognito
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: D
D is correct
upvoted 2 times
7 months ago
API + Cognito integration - Answer D
upvoted 2 times
7 months ago
Selected Answer: D
Answer : D
Check Gabs90 link
Use the Amazon Cognito console, CLI/SDK, or API to create a user pool—or use one that's owned by another AWS account
upvoted 1 times
7 months ago
Selected Answer: D
D - https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-cognito-user-pool-authorizer/
upvoted 1 times
7 months ago
Selected Answer: D
seems to be D to me: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
upvoted 4 times
7 months ago
Selected Answer: D
D is correct
upvoted 1 times
Topic 1
Question #201
A company is developing a marketing communications service that targets mobile app users. The company needs to send confirmation messages
with Short Message Service (SMS) to its users. The users must be able to reply to the SMS messages. The company must store the responses for
a year for analysis.
What should a solutions architect do to meet these requirements?
A. Create an Amazon Connect contact flow to send the SMS messages. Use AWS Lambda to process the responses.
B. Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis and archiving.
C. Use Amazon Simple Queue Service (Amazon SQS) to distribute the SMS messages. Use AWS Lambda to process the responses.
D. Create an Amazon Simple Notification Service (Amazon SNS) FIFO topic. Subscribe an Amazon Kinesis data stream to the SNS topic for
analysis and archiving.
Correct Answer:
A
2 days, 10 hours ago
Selected Answer: B
By using Pinpoint, the company can effectively send SMS messages to its mobile app users. Additionally, Pinpoint allows the configuration of
journeys, which enable the tracking and management of user interactions. The events generated during the journey, including user responses to
SMS, can be captured and sent to an Kinesis data stream. This data stream can then be used for analysis and archiving purposes.
A. Creating an Amazon Connect contact flow is primarily focused on customer support and engagement, and it lacks the capability to store and
process SMS responses for analysis.
C. Using SQS is a message queuing service and is not specifically designed for handling SMS responses or capturing them for analysis.
D. Creating an SNS FIFO topic and subscribing a Kinesis data stream is not the most appropriate solution for capturing and storing SMS responses,
as SNS is primarily used for message publishing and distribution.
In summary, option B is the best choice as it leverages Pinpoint to send SMS messages and captures user responses for analysis and archiving
using an Kinesis data stream.
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: B
Option B is correct answer: link: https://aws.amazon.com/pinpoint/, and video under the link.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: B
Two-Way Messaging
Receive SMS messages from your customers and reply back to them in a chat-like interactive experience. With Amazon Pinpoint, you can create
automatic responses when customers send you messages that contain certain keywords.
upvoted 1 times
1 month, 4 weeks ago
Based on my research, a Kinesis stream is for real-time data ingestion and stores only event data, not the actual user responses; furthermore,
there is no requirement for real-time data streaming. That is probably why I hesitate to agree with everyone on B and would rather choose A.
2 months ago
Selected Answer: B
The answer is B. AWS Pinpoint is for Marketing communications.
AWS Connect is for Contact center.
upvoted 1 times
2 months ago
Selected Answer: A
According to the following link I would choose Option A.
https://docs.aws.amazon.com/connect/latest/adminguide/web-and-mobile-chat.html
upvoted 1 times
3 weeks, 1 day ago
No - note that the question states all activity happens through SMS. An Amazon Connect contact flow most likely works through a web
application UI, but the question is clearly about receiving and sending SMS, not going through an application UI (web/mobile app). For those
reasons we choose B.
upvoted 1 times
4 months, 4 weeks ago
Selected Answer: B
Amazon Pinpoint is a flexible, scalable and fully managed push notification and SMS service for mobile apps.
upvoted 3 times
5 months, 1 week ago
It's B, see following link https://docs.aws.amazon.com/pinpoint/latest/developerguide/event-streams.html
upvoted 2 times
5 months, 1 week ago
Selected Answer: B
https://aws.amazon.com/pinpoint/product-details/sms/
Two-Way Messaging:
Receive SMS messages from your customers and reply back to them in a chat-like interactive experience. With Amazon Pinpoint, you can create
automatic responses when customers send you messages that contain certain keywords. You can even use Amazon Lex to create conversational
bots.
A majority of mobile phone users read incoming SMS messages almost immediately after receiving them. If you need to be able to provide your
customers with urgent or important information, SMS messaging may be the right solution for you.
You can use Amazon Pinpoint to create targeted groups of customers, and then send them campaign-based messages. You can also use Amazon
Pinpoint to send direct messages, such as appointment confirmations, order updates, and one-time passwords.
upvoted 2 times
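The two-way messaging setup described above can be sketched with the Pinpoint API. The application ID, phone number, Kinesis stream ARN, and IAM role ARN are hypothetical placeholders:

```python
def sms_message_request(phone_number, body):
    """MessageRequest payload for pinpoint.send_messages; TRANSACTIONAL
    suits confirmation messages that must be delivered promptly."""
    return {
        "Addresses": {phone_number: {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {"Body": body, "MessageType": "TRANSACTIONAL"},
        },
    }

def send_and_stream(app_id, phone_number, stream_arn, role_arn):
    # Lazy import keeps the payload helper above testable offline.
    import boto3
    pinpoint = boto3.client("pinpoint")
    # Route all Pinpoint events (including inbound SMS replies) to a
    # Kinesis data stream, where a consumer can archive them for the
    # one-year retention window.
    pinpoint.put_event_stream(
        ApplicationId=app_id,
        WriteEventStream={
            "DestinationStreamArn": stream_arn,
            "RoleArn": role_arn,
        },
    )
    pinpoint.send_messages(
        ApplicationId=app_id,
        MessageRequest=sms_message_request(
            phone_number, "Reply YES to confirm."
        ),
    )
```

This is why B fits: sending, receiving replies (two-way SMS), and exporting events for analysis and archiving are all built into Pinpoint.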
5 months, 2 weeks ago
Selected Answer: D
D:
Amazon Simple Notification Service (SNS) is a fully managed messaging service that enables you to send and receive SMS messages in a cost-
effective and highly scalable way. By creating an SNS FIFO topic, you can ensure that the SMS messages are delivered to your users in the order
they were sent and that the SMS responses are processed and stored in the same order. You can also configure your SNS FIFO topic to publish
SMS responses to an Amazon Kinesis data stream, which will allow you to store and analyze the responses for a year.
Amazon Pinpoint? No!
It is not the correct solution because, while Amazon Pinpoint allows you to send SMS and email campaigns as well as handle push notifications to a
user base, it doesn't provide an SMS sending feature by itself. Furthermore, it's a service mainly focused on sending and tracking marketing
campaigns, not on managing two-way SMS communication and the reception of replies.
upvoted 2 times
4 months, 3 weeks ago
What do think about https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-two-way.html?
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
To send SMS messages and store the responses for a year for analysis, the company can use Amazon Pinpoint. Amazon Pinpoint is a fully-managed
service that allows you to send targeted and personalized SMS messages to your users and track the results.
To meet the requirements of the company, a solutions architect can build an Amazon Pinpoint journey and configure Amazon Pinpoint to send
events to an Amazon Kinesis data stream for analysis and archiving. The Kinesis data stream can be configured to store the data for a year, allowing
the company to analyze the responses over time.
So, Option B is the correct answer.
Option B. Build an Amazon Pinpoint journey. Configure Amazon Pinpoint to send events to an Amazon Kinesis data stream for analysis and
archiving.
upvoted 3 times
6 months, 1 week ago
Selected Answer: B
We need analysis and archiving; A doesn't help with that.
upvoted 1 times
6 months, 1 week ago
B is correct answer
upvoted 1 times
6 months, 1 week ago
Selected Answer: B
Answer B, This is Pinpoint usecase
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: B
Anytime you see marketing or campaign, just pick AWS Pinpoint.
upvoted 4 times
6 months, 3 weeks ago
Selected Answer: B
Amazon Pinpoint is perfect choice for this scenario, as it provides 2 way communication and can stream events to kinesis Data stream for further
analysis.
upvoted 4 times
6 months, 3 weeks ago
Selected Answer: B
The diagram on the link shows "Campaign and journeys" with the arrow directing to Channels which includes SMS, emails etc. So the correct choice
is B.
https://aws.amazon.com/pinpoint/
upvoted 1 times
Topic 1
Question #202
A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the
encryption key must be automatically rotated every year.
Which solution will meet these requirements with the LEAST operational overhead?
A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation
behavior of SSE-S3 encryption keys.
B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket’s default
encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket’s default encryption behavior to use the
customer managed KMS key. Move the data to the S3 bucket. Manually rotate the KMS key every year.
D. Encrypt the data with customer key material before moving the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS)
key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.
Correct Answer:
B
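The listed answer B can be sketched in code. This is a minimal sketch, assuming boto3 and hypothetical bucket/key identifiers; the helper only builds the `put_bucket_encryption` payload, which you would pass to the S3 client after creating the customer managed key and calling `kms.enable_key_rotation` on it.

```python
# Sketch of option B: set a customer managed KMS key (with automatic yearly
# rotation enabled via kms.enable_key_rotation) as the bucket's default
# encryption BEFORE moving the data in. Bucket and key ARN are hypothetical.
# This helper only builds the s3.put_bucket_encryption request payload.

def bucket_encryption_params(bucket: str, kms_key_arn: str) -> dict:
    """Payload for s3.put_bucket_encryption using SSE-KMS."""
    return {
        "Bucket": bucket,
        "ServerSideEncryptionConfiguration": {
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": kms_key_arn,
                    },
                    # S3 Bucket Keys reduce KMS request costs for SSE-KMS.
                    "BucketKeyEnabled": True,
                }
            ]
        },
    }
```

Note the ordering: default encryption only applies to objects written after it is configured, which is why the data move comes last in option B.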
Highly Voted
6 months, 1 week ago
Selected Answer: B
SSE-S3 is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS, and is shared among many accounts. Its rotation is automatic, with a period that varies as shown in the table in the docs; the time is not explicitly defined.
SSE-KMS has two flavors:
AWS managed CMK. This is a free CMK generated only for your account. You can only view its policies and audit its usage, but not manage it. Rotation is automatic, once per 1095 days (3 years).
Customer managed CMK. This uses your own key that you create and can manage. Rotation is not enabled by default, but if you enable it, the key will be automatically rotated every year. This variant can also use key material imported by you. If you create such a key with imported material, there is no automatic rotation, only manual rotation.
SSE-C - customer provided key. The encryption key is fully managed by you outside of AWS. AWS will not rotate it.
upvoted 20 times
1 month ago
AWS managed CMK rotates every 365 days (not 1095 days). Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt
upvoted 1 times
Highly Voted
6 months, 1 week ago
Selected Answer: A
KEYWORD: LEAST operational overhead
To encrypt the data when it is stored in the S3 bucket and automatically rotate the encryption key every year with the least operational overhead,
the company can use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). SSE-S3 uses keys that are managed by Amazon
S3, and the built-in key rotation behavior of SSE-S3 encryption keys automatically rotates the keys every year.
To meet the requirements of the company, the solutions architect can move the data to the S3 bucket and enable server-side encryption with SSE-
S3. This solution requires no additional configuration or maintenance and has the least operational overhead.
Hence, the correct answer is:
Option A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Use the built-in key
rotation behavior of SSE-S3 encryption keys.
upvoted 17 times
5 months ago
The order of these events is being ignored here, in my opinion. The encryption checkbox needs to be checked before data is moved into the S3 bucket; otherwise the data will not be encrypted, and you'll have to encrypt it manually and reload it into the bucket. If the box was checked before moving data into S3, then you are good to go!
upvoted 2 times
5 months ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html
upvoted 1 times
Community vote distribution
B (63%)
A (37%)
6 months, 1 week ago
Option B involves using a customer-managed AWS KMS key and enabling automatic key rotation, but this requires the company to manage the
KMS key and monitor the key rotation process.
Option C involves using a customer-managed AWS KMS key, but this requires the company to manually rotate the key every year, which
introduces additional operational overhead.
Option D involves encrypting the data with customer key material and creating a KMS key without key material, but this requires the company
to manage the customer key material and import it into the KMS key, which introduces additional operational overhead.
upvoted 2 times
5 months, 2 weeks ago
But...
For A there is no reference to how often these keys are rotated, and to rotate to a new key, you need to upload it, which is operational
overhead. So not only does it not necessarily meet the 'rotate keys every year' requirement, but every year it requires operational overhead.
More importantly, the question states move the objects first, and then configure encryption, but ..."There is no change to the encryption of
the objects that existed in the bucket before default encryption was enabled." from
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html
So A is clearly wrong.
For B, whilst you have to set up KMS once, you then don't have to do anything else, which I would say is the least operational overhead.
upvoted 11 times
5 months, 3 weeks ago
God bless you, man! The most articulated answers, easy to understand. Good job!
upvoted 3 times
5 months, 2 weeks ago
But wrong :)
upvoted 4 times
4 months, 3 weeks ago
Reviewed it the second time. Some of them are wrong, indeed.
upvoted 1 times
Most Recent
1 day, 17 hours ago
Selected Answer: B
A. While using SSE-S3 the key rotation is handled automatically by AWS. AWS rotates the encryption keys at least once every 1095 days (3 years) on
your behalf.
B. By using a customer managed key in AWS KMS with automatic key rotation enabled, and setting the S3 bucket's default encryption behavior to
use this key, the data stored in the S3 bucket will be encrypted and the encryption key will be automatically rotated every year.
C. This answer is not the most optimal solution as it suggests manually rotating the KMS key every year, which introduces manual intervention and
increases operational overhead.
D. This answer is not the most suitable option as it involves encrypting the data with customer key material and managing the key rotation
manually. It adds complexity and management overhead compared to using AWS KMS for key management and encryption.
upvoted 1 times
4 days, 17 hours ago
ChatGPT says it's B
upvoted 1 times
1 month, 1 week ago
AWS KMS automatically rotates AWS managed keys (SSE-S3) every year (approximately 365 days). You cannot enable or disable key rotation for
AWS managed keys.
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: A
This question is old and written when there was no default encryption on S3.
Choosing A because Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for
every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and
with no impact on performance.
upvoted 2 times
1 month, 2 weeks ago
Just created a bucket and it says: The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to server-side
encryption with Amazon S3 managed keys (SSE-S3). With server-side encryption, Amazon S3 encrypts an object before saving it to disk and
decrypts it when you download the object. Encryption doesn't change the way that you access data as an authorized user. It only further
protects your data.
You can configure default encryption for a bucket. You can use either server-side encryption with Amazon S3 managed keys (SSE-S3) (the
default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
upvoted 1 times
1 month, 2 weeks ago
B is correct answer.
KEYWORD: LEAST operational overhead and the encryption key must be automatically rotated every year
SSE-S3: cannot rotation.
Base on aws site: If you need more control over your keys, such as managing key rotation and access policy grants, you can choose to use server-
side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
upvoted 1 times
3 months ago
Selected Answer: B
Because of the chronology of the events and the operational overhead of maintaining the key rotation process, I vote for B. With an SSE-KMS CMK plus automatic key rotation enabled every year, you will satisfy all the requirements.
upvoted 1 times
3 months ago
The question didn't ask about customer-managed keys, so my answer is A.
upvoted 1 times
3 months, 1 week ago
Answer is A.
Why?
Server-side encryption protects data at rest. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key
itself with a key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data,
256-bit Advanced Encryption Standard (AES-256).
upvoted 1 times
3 months, 1 week ago
The encryption key must be automatically rotated every year --> SSE-S3 has default rotation, which rotates regularly, but SSE-KMS can be enabled to rotate every year.
upvoted 1 times
3 months, 1 week ago
Selected Answer: A
I would like to go for option A due to the least operational work.
upvoted 1 times
3 months, 1 week ago
I would say the answer is supposed to be A due to the least operational overhead.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: B
Because in option A: Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a key that it rotates regularly.
Does this mean Amazon does not rotate the keys with which the objects are encrypted, but rather that the root key is the one rotated regularly?
upvoted 3 times
4 months ago
Selected Answer: B
Option B allows me to set the auto rotation every year. SSE-S3 doesn't allow me to control *when* a key gets auto-rotated.
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
upvoted 1 times
Topic 1
Question #203
The customers of a finance company request appointments with financial advisors by sending text messages. A web application that runs on
Amazon EC2 instances accepts the appointment requests. The text messages are published to an Amazon Simple Queue Service (Amazon SQS)
queue through the web application. Another application that runs on EC2 instances then sends meeting invitations and meeting confirmation
email messages to the customers. After successful scheduling, this application stores the meeting information in an Amazon DynamoDB
database.
As the company expands, customers report that their meeting invitations are taking longer to arrive.
What should a solutions architect recommend to resolve this issue?
A. Add a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database.
B. Add an Amazon API Gateway API in front of the web application that accepts the appointment requests.
C. Add an Amazon CloudFront distribution. Set the origin as the web application that accepts the appointment requests.
D. Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the depth
of the SQS queue.
Correct Answer:
D
Highly Voted
6 months, 1 week ago
Selected Answer: D
Option D. Add an Auto Scaling group for the application that sends meeting invitations. Configure the Auto Scaling group to scale based on the
depth of the SQS queue.
To resolve the issue of longer delivery times for meeting invitations, the solutions architect can recommend adding an Auto Scaling group for the
application that sends meeting invitations and configuring the Auto Scaling group to scale based on the depth of the SQS queue. This will allow
the application to scale up as the number of appointment requests increases, improving the performance and delivery times of the meeting
invitations.
upvoted 8 times
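The scaling policy in option D is typically implemented as "backlog per instance" scaling on the SQS `ApproximateNumberOfMessagesVisible` metric. A minimal sketch of the arithmetic, with a hypothetical per-worker throughput value:

```python
import math

def desired_capacity(queue_depth: int, msgs_per_instance: int) -> int:
    """Desired Auto Scaling group size given the SQS backlog.

    queue_depth is the ApproximateNumberOfMessagesVisible metric value;
    msgs_per_instance is the backlog one worker can drain within the
    acceptable latency (a hypothetical tuning value).
    """
    if queue_depth == 0:
        return 1  # keep a minimum of one invitation-sender running
    return max(1, math.ceil(queue_depth / msgs_per_instance))
```

In practice this ratio is expressed as a target tracking policy on a customized CloudWatch metric (queue depth divided by running instances), as described in the AWS Auto Scaling documentation on scaling based on SQS.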
Most Recent
1 day, 17 hours ago
Selected Answer: D
By adding an ASG for the application that sends meeting invitations and configuring it to scale based on the depth of the SQS, the system can
automatically adjust its capacity based on the number of pending messages in the queue. This ensures that the application can handle increased
message load and process the meeting invitations more efficiently, reducing the delay experienced by customers.
A. Adding a DynamoDB Accelerator (DAX) cluster in front of the DynamoDB database would improve read performance for DynamoDB, but it does
not directly address the issue of delayed meeting invitations.
B. Adding an API Gateway API in front of the web application that accepts the appointment requests may help with request handling and
management, but it does not directly address the issue of delayed meeting invitations.
C. Adding an CloudFront distribution with the web application as the origin would improve content delivery and caching, but it does not directly
address the issue of delayed meeting invitations.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D is the right answer.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: D
Agreed
upvoted 1 times
7 months ago
Selected Answer: D
Answer D
upvoted 1 times
Community vote distribution
D (100%)
7 months ago
Selected Answer: D
D meets the requirements
upvoted 1 times
7 months ago
Selected Answer: D
Answer : D
upvoted 1 times
Topic 1
Question #204
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects
purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability
to manage fine-grained permissions for the data and must minimize operational overhead.
Which solution will meet these requirements?
A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.
B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon
Athena to query the data. Use S3 policies to limit access.
C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake
Formation. Use Lake Formation access controls to limit access.
D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to
Amazon Redshift. Use Amazon Redshift access controls to limit access.
Correct Answer:
D
Highly Voted
6 months, 2 weeks ago
Answer : C keyword "manage-fine-grained"
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
upvoted 11 times
1 week, 4 days ago
You can manage fine-grained access using Redshift as well - https://aws.amazon.com/blogs/big-data/achieve-fine-grained-data-security-with-row-level-access-control-in-amazon-redshift/
But I believe the keyword to look for is "minimize operational overhead", which Lake Formation achieves without duplicating much of the data.
Redshift adds operational overhead and duplicates data. Not sure why the given answer is D; I vote C as well.
upvoted 1 times
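The fine-grained control the community answer (C) refers to looks roughly like this in Lake Formation: a column-level SELECT grant to a team's IAM role. A sketch assuming boto3; the role, database, table, and column names are hypothetical, and the helper only builds the `grant_permissions` payload.

```python
# Sketch of Lake Formation fine-grained access (option C): grant a team's
# role SELECT on specific columns of a catalog table. All identifiers are
# hypothetical. The dict would be passed to
# boto3.client("lakeformation").grant_permissions(**params).

def lf_grant_params(principal_arn: str, database: str, table: str,
                    columns: list) -> dict:
    """Payload for lakeformation.grant_permissions (column-level SELECT)."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {
            "TableWithColumns": {
                "DatabaseName": database,
                "Name": table,
                "ColumnNames": columns,
            }
        },
        "Permissions": ["SELECT"],
    }
```

This centralizes access control in one place for both the S3 data and the RDS tables crawled into the Glue Data Catalog, which is the "minimize operational overhead" angle.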
Most Recent
1 day, 16 hours ago
Selected Answer: C
Lake Formation enables the creation of a secure and scalable data lake on AWS, allowing centralized access controls for both S3 and RDS data. By
using Lake Formation, the company can manage permissions effectively and integrate RDS data through the AWS Glue JDBC connection.
Registering the S3 in Lake Formation ensures unified access control. This solution reduces operational overhead while providing fine-grained
permissions management.
A. Directly writing purchase data to Amazon RDS with RDS access controls lacks comprehensive permissions management for both S3 and RDS
data.
B. Periodically copying data from RDS to S3 using Lambda and using AWS Glue and Athena for querying does not offer fine-grained permissions
management and introduces data synchronization complexities.
D. Creating an Redshift cluster and copying data from S3 and RDS to Redshift adds complexity and operational overhead without the flexibility of
Lake Formation's permissions management capabilities.
upvoted 1 times
4 days, 17 hours ago
Answer is C. AWS Lake Formation provides a comprehensive solution for building and managing a data lake. It simplifies data ingestion,
organization, and access control. By creating a data lake using AWS Lake Formation, you can centralize and govern access to your data across
multiple sources.
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: C
Option C is right answer: https://docs.aws.amazon.com/lake-formation/latest/dg/what-is-lake-formation.html
upvoted 1 times
3 weeks, 6 days ago
Lake Formation helps you manage fine-grained access for internal and external customers from a centralized location and in a scalable way.
upvoted 1 times
Community vote distribution
C (100%)
5 months ago
https://docs.aws.amazon.com/lake-formation/latest/dg/access-control-overview.html
upvoted 2 times
5 months, 1 week ago
Selected Answer: C
To me, the give-away was: "The company wants to make all the data available to various teams" - Data-Lake - All data in one place.
upvoted 4 times
5 months, 3 weeks ago
The correct answer is D.
The company wants to make all the data available to various teams so that the teams can do their analysis.
Therefore, the best way is to configure Redshift separately for data warehousing and have all employees connect to the Redshift DB and perform analysis tasks without burdening the operational DB (must minimize operational overhead).
upvoted 2 times
3 weeks, 2 days ago
I don't think that "periodically copy data from Amazon S3 and RDS to Redshift" minimizes the operational overhead. The correct answer for me is C.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
Manage fine-grained access control using AWS Lake Formation
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake
Formation. Use Lake Formation access controls to limit access.
To make all the data available to various teams and minimize operational overhead, the company can create a data lake by using AWS Lake
Formation. This will allow the company to centralize all the data in one place and use fine-grained access controls to manage access to the data.
To meet the requirements of the company, the solutions architect can create a data lake by using AWS Lake Formation, create an AWS Glue JDBC
connection to Amazon RDS, and register the S3 bucket in Lake Formation. The solutions architect can then use Lake Formation access controls to
limit access to the data. This solution will provide the ability to manage fine-grained permissions for the data and minimize operational overhead.
upvoted 3 times
1 month, 1 week ago
..................
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Based on a combination of the following 2 URLs, I believe it is C:
https://aws.amazon.com/lake-formation/
https://aws.amazon.com/blogs/big-data/manage-fine-grained-access-control-using-aws-lake-formation/
upvoted 1 times
6 months, 1 week ago
Option C is the right answer. Fine-grained access-control from different types of data sources is a Lakeformation usecase.
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: C
CCCCCCCCCCCC
upvoted 2 times
6 months, 3 weeks ago
Selected Answer: C
ANSWER IS OF COURSE C
upvoted 1 times
7 months ago
Selected Answer: C
I think the answer is C because the keyword here is "FINE GRAINED" which Lake Formation provides
upvoted 2 times
7 months ago
Selected Answer: C
Answer C
upvoted 1 times
7 months ago
Selected Answer: C
Data lake is for complex data sources
upvoted 1 times
Topic 1
Question #205
A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An
administrator updates the website content infrequently and uses an SFTP client to upload new documents.
The company decides to host its website on AWS and to use Amazon CloudFront. The company’s solutions architect creates a CloudFront
distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront
origin.
Which solution will meet these requirements?
A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an
SFTP client.
B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP
client.
C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website
content by using the AWS CLI.
D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content
by using the SFTP client.
Correct Answer:
C
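Option C hinges on a bucket policy that lets only the CloudFront OAI read objects. A minimal sketch with hypothetical bucket and OAI IDs; the resulting JSON would be passed to `s3.put_bucket_policy`. The OAI principal ARN format below follows the CloudFront documentation.

```python
import json

def oai_bucket_policy(bucket: str, oai_id: str) -> str:
    """Bucket policy allowing s3:GetObject only to a CloudFront origin
    access identity (option C). bucket and oai_id are hypothetical."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": ("arn:aws:iam::cloudfront:user/"
                            f"CloudFront Origin Access Identity {oai_id}")
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy)
```

With this policy in place the bucket stays private (resilient, no public website endpoint needed) and CloudFront serves the static content, which keeps costs low.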
1 day, 16 hours ago
Selected Answer: C
Hosting the website in a private S3 provides cost-effective and highly available storage for the static website content. By configuring a bucket
policy to allow access from a CloudFront OAI, the S3 can be securely accessed only through CloudFront. This ensures that the website content is
served through CloudFront while keeping the S3 private. Uploading website content using the AWS CLI allows for easy and efficient content
management.
A. Hosting the website on an Lightsail virtual server would introduce additional management overhead and costs compared to using S3 directly for
static content hosting.
B. Using an AWS ASG with EC2 instances and an ALB is not necessary for serving static website content. It would add unnecessary complexity and
cost.
D. While using AWS Transfer for SFTP allows for SFTP uploads, it introduces additional costs and complexity compared to directly uploading
content to an S3 using the AWS CLI. Additionally, hosting the website content in a public S3 may not be desirable from a security standpoint.
upvoted 1 times
1 month ago
Selected Answer: D
D - SFTP client to upload new documents.
upvoted 1 times
4 months, 1 week ago
Selected Answer: C
AWS Transfer adds cost, and the option doesn't mention using CloudFront.
https://aws.amazon.com/aws-transfer-family/pricing/
upvoted 4 times
4 months, 2 weeks ago
Selected Answer: C
If you don't want to disable block public access settings for your bucket but you still want your website to be public, you can create a Amazon
CloudFront distribution to serve your static website. For more information, see Use an Amazon CloudFront distribution to serve a static website in
the Amazon Route 53 Developer Guide.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html
upvoted 1 times
5 months ago
Selected Answer: C
I at first thought D but it is in fact C because
"D: Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client." The question says that the company has decided to use Amazon CloudFront, and this answer does not reference using CloudFront and setting S3 as the origin.
"C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using the AWS CLI." - mentions CloudFront and the origin, and the AWS CLI does in fact support transfer by SFTP (which was the part I originally doubted, but this link evidences that it does):
https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html
upvoted 2 times
5 months, 1 week ago
Selected Answer: D
Option C, creating a private Amazon S3 bucket and using an S3 bucket policy to allow access from a CloudFront origin access identity (OAI), would
not be the most cost-effective solution. While it would allow the company to use Amazon S3 for storage, it would also require additional setup and
maintenance of the OAI, which would add additional cost. Additionally, this solution would not allow the use of SFTP client for uploading content
which is the current method used by the company.
upvoted 1 times
5 months, 2 weeks ago
The Answer is C
https://medium.com/aws-poc-and-learning/how-to-access-s3-hosted-website-via-cloudfront-using-oai-origin-access-identity-720ad7c57f15
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: C
Option C is a better choice than D for the following reasons:
(1) Cost effective: data transfer is cheaper through CloudFront than directly from the S3 bucket.
(2) Resilient: recovery from failures. Having a CloudFront distribution and scoping the S3 bucket policy to CloudFront only, i.e. a private bucket (with OAI for access), hardens security and improves resiliency.
upvoted 3 times
5 months, 3 weeks ago
Selected Answer: C
Without extra setup in AWS, you cannot use SFTP to connect to it, so D is not the case.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
S3 + CloudFront. In this case, S3 does not need to be public.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: D
The most cost-effective and resilient solution for hosting a website on AWS with CloudFront is to create a public Amazon S3 bucket, configure AWS
Transfer for SFTP, configure the S3 bucket for website hosting, and then upload website content using the SFTP client.
Option A involves using Amazon Lightsail to create a virtual server, which may not be the most cost-effective solution compared to using S3.
Option B involves using an Auto Scaling group with EC2 instances and an Application Load Balancer, which may be more expensive and complex
than using S3. Option C involves creating a private S3 bucket, which may not allow CloudFront to access the website content.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
KEYWORD: most cost-effective and resilient architecture
Option D: Creating a public Amazon S3 bucket, configuring AWS Transfer for SFTP, configuring the S3 bucket for website hosting, and uploading
website content by using the SFTP client will meet these requirements with the most cost-effective and resilient architecture.
Configuring AWS Transfer for SFTP allows the company to securely upload content to the S3 bucket using the SFTP client, which the administrator
is already familiar with. This eliminates the need to change the administrator’s workflow or learn new tools.
upvoted 1 times
5 months, 1 week ago
https://medium.com/aws-poc-and-learning/how-to-access-s3-hosted-website-via-cloudfront-using-oai-origin-access-identity-720ad7c57f15
upvoted 1 times
6 months, 1 week ago
Option C: Creating a private Amazon S3 bucket and using an S3 bucket policy to allow access from a CloudFront origin access identity (OAI) is
not a suitable solution because it does not allow the administrator to use an SFTP client to upload website content. The administrator would
need to use the AWS CLI or a different tool to upload content to the S3 bucket, which would require a change in the administrator’s workflow.
upvoted 1 times
5 months, 2 weeks ago
The requirements are "cost-effective and resilient architecture", and nothing about least operational overhead so your concerns are not valid.
Cloudfront makes it resilient and cuts costs, so far more relevant.
upvoted 1 times
6 months ago
. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin.
Are you sure about D?
upvoted 1 times
5 months, 1 week ago
An administrator updates the website content infrequently and uses an SFTP client to upload new documents.
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Answer is C. The bucket doesn't need to be public when using CloudFront.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
upvoted 1 times
5 months, 2 weeks ago
Yes " If your use case requires the block public access settings to be turned on, use the REST API endpoint as the origin. Then, restrict access by
an origin access control (OAC) or origin access identity (OAI)."
upvoted 1 times
6 months, 1 week ago
C is correct answer
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
Option C is the right answer, as the company has already decided to use CloudFront.
Option D is not correct, as it does not use CloudFront.
As long as there is a way to upload the content using the CLI, it is OK, since updates are not very frequent.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
According to https://www.pass4future.com/questions/amazon/saa-c02
upvoted 2 times
6 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/81299-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Topic 1
Question #206
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were
created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API
operation is called within the company’s account.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to
Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple
Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda
function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Correct Answer:
D
Highly Voted
7 months ago
Selected Answer: C
I'm team C.
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-
events.html#:~:text=For%20example%2C%20you%20can%20create%20an%20EventBridge%20rule%20that%20detects%20when%20the%20AMI%2
0creation%20process%20has%20completed%20and%20then%20invokes%20an%20Amazon%20SNS%20topic%20to%20send%20an%20email%20n
otification%20to%20you.
upvoted 13 times
5 months, 2 weeks ago
That link contains the exact use case and explains how C can be used.
Option B requires you to send logs to S3 and use Athena, 2 additional services that are not required, so this does not meet the "LEAST
operational overhead?" requirement, since these are extra services requiring management.
upvoted 2 times
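The EventBridge approach in option C matches CloudTrail-recorded API calls with an event pattern. A sketch assuming boto3; the rule name is hypothetical, and the payload would be passed to `events.put_rule`, with an SNS topic then attached via `events.put_targets`.

```python
import json

def create_image_rule_params(rule_name: str) -> dict:
    """Payload for events.put_rule matching the EC2 CreateImage call
    as recorded by CloudTrail (option C). rule_name is hypothetical."""
    pattern = {
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["CreateImage"],
        },
    }
    return {"Name": rule_name, "EventPattern": json.dumps(pattern)}
```

Because EventBridge and SNS are both fully managed, no Lambda function, log querying, or queue is needed, which is the "least operational overhead" argument for C.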
Highly Voted
6 months ago
Selected Answer: A
Why not A? API calls are already logged in Cloudtrail.
upvoted 9 times
Most Recent
1 day, 16 hours ago
EventBridge (formerly CloudWatch Events) is a fully managed event bus service that allows you to monitor and respond to events within your AWS
environment. By creating an EventBridge rule specifically for the CreateImage API call, you can easily detect and capture this event. Configuring the
target as an SNS topic allows you to send an alert whenever a CreateImage API call occurs. This solution requires minimal operational overhead as
EventBridge and SNS are fully managed services.
A. While using an Lambda to query CloudTrail logs and send an alert can achieve the desired outcome, it introduces additional operational
overhead compared to using EventBridge and SNS directly.
B. Configuring CloudTrail with an SNS notification and using Athena to query on CreateImage API calls would require more setup and maintenance
compared to using EventBridge and SNS.
D. Configuring an SQS FIFO queue as a target for CloudTrail logs and using a function to send an alert to an SNS topic adds unnecessary
complexity to the solution and increases operational overhead. Using EventBridge and SNS directly is a simpler and more efficient approach.
upvoted 1 times
4 days, 17 hours ago
D makes no sense, FIFO is not required, SQS is not used for sending notifications...C all the way
upvoted 1 times
1 week, 1 day ago
Selected Answer: D
Per the link shared by those who chose C: it says EventBridge can catch AMI events (available/failed/deregistered). In that doc, CreateImage is not distinguished from CopyImage/RegisterImage/CreateRestoreImageTask. So it's not C.
It's not B because that has a lot of overhead.
And the question says "whenever", meaning as quickly as possible, so it's not A.
The right answer is D
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
"For example, you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send an email notification to you."
upvoted 1 times
3 months, 1 week ago
Selected Answer: C
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
upvoted 2 times
3 months ago
Option C makes sense here.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: C
LEAST operational overhead
upvoted 1 times
6 months, 1 week ago
Selected Answer: C
The correct solution is Option C. Creating an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call and configuring
the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected will meet the
requirements with the least operational overhead.
Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications,
integrated Software as a Service (SaaS) applications, and AWS services. By creating an EventBridge rule for the CreateImage API call, the company
can set up alerts whenever this operation is called within their account. The alert can be sent to an SNS topic, which can then be configured to send
notifications to the company's email or other desired destination.
This solution does not require the company to create a Lambda function or query CloudTrail logs, which makes it the most cost-effective and
efficient option.
upvoted 7 times
6 months, 1 week ago
Selected Answer: C
Option C is right answer.
EventBridge has an integration with CloudTrail as a source of events (using pipes).
Option D is incorrect as CloudTrail cannot automatically send its API event logs to SQS.
upvoted 1 times
6 months, 2 weeks ago
C
Option B is not correct because it involves using Amazon Athena to query AWS CloudTrail logs, which can be a time-consuming and error-prone
process. Additionally, it requires the company to manage the underlying infrastructure for Amazon Athena, which adds operational overhead.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: C
answer is c
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
it is C
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: B
The goal is to trigger on the AMI create event from the API call. For me it's B, because C mentioned EventBridge only focuses on state changes (available,
failed, deregistered), and we don't need those details according to the question.
upvoted 1 times
6 months, 2 weeks ago
Please read documentation:
" you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send
an email notification to you."
So it does send an event when the AMI is created, so C is correct.
upvoted 4 times
5 months, 3 weeks ago
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitor-ami-events.html
upvoted 1 times
6 months, 3 weeks ago
Selected Answer: C
Option B and C seems right but "LEAST operational overhead" eliminates B. So, C is the right answer.
upvoted 1 times
7 months ago
Selected Answer: B
B - https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
upvoted 1 times
7 months ago
typo - it's C
upvoted 2 times
7 months ago
why it is not D? I think this is the correct answer
upvoted 2 times
6 months, 1 week ago
D is incorrect because it requires the company to configure an SQS FIFO queue as a target for CloudTrail logs, create a Lambda function, and
send an alert to an SNS topic.
This option is more complex and requires more operational overhead than creating an EventBridge rule.
Hence, the correct solution is Option C.
upvoted 1 times
Topic 1
Question #207
A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate
microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes
Amazon DynamoDB to store user requests before dispatching them to the processing microservices.
The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is
losing user requests.
What should a solutions architect do to address this issue without impacting existing users?
A. Add throttling on the API Gateway with server-side throttling limits.
B. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB.
C. Create a secondary index in DynamoDB for the table with the user requests.
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.
Correct Answer:
D
1 day, 16 hours ago
Selected Answer: D
This solution can handle bursts of incoming requests more effectively and reduce the chances of losing requests due to DynamoDB capacity
limitations. The Lambda can be configured to retrieve messages from the SQS and write them to DynamoDB at a controlled rate, allowing
DynamoDB to handle the requests within its provisioned capacity. This approach provides resilience to spikes in traffic and ensures that requests
are not lost during periods of high demand.
A. It limits can help control the request rate, but it may lead to an increase in errors and affect the user experience. Throttling alone may not be
sufficient to address the availability issues and prevent the loss of requests.
B. It can improve read performance but does not directly address the availability issues and loss of requests. It focuses on optimizing read
operations rather than buffering writes.
C. It may help with querying the user requests efficiently, but it does not directly solve the availability issues or prevent the loss of requests. It is
more focused on data retrieval rather than buffering writes.
upvoted 1 times
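The buffering pattern described above (SQS in front of Lambda, which writes to DynamoDB at a controlled rate) can be sketched as below. This is a hedged sketch with assumed names, not the question's reference implementation: each SQS record body is assumed to be a JSON-encoded user request with an "id" field, and the table/attribute names are hypothetical.

```python
import json

def sqs_records_to_put_requests(sqs_event: dict) -> list:
    """Convert an SQS-triggered Lambda event into DynamoDB PutRequest items
    suitable for BatchWriteItem."""
    put_requests = []
    for record in sqs_event.get("Records", []):
        item = json.loads(record["body"])
        put_requests.append({"PutRequest": {"Item": {
            "pk": {"S": str(item["id"])},      # partition key (assumed schema)
            "payload": {"S": record["body"]},  # raw request kept for dispatch
        }}})
    return put_requests

# In the real handler these would be written in chunks of at most 25 items
# (the BatchWriteItem limit) via boto3, e.g.:
#   dynamodb.batch_write_item(RequestItems={"UserRequests": chunk})
# Unprocessed items can be returned to the queue for retry, so no request is lost.
```

Because messages stay in the queue until the Lambda deletes them, a burst that exceeds DynamoDB's provisioned throughput is absorbed by the queue instead of being dropped.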
1 month, 2 weeks ago
Selected Answer: D
DAX is for reads
upvoted 1 times
3 weeks ago
DAX is not ideal for the following types of applications:
Applications that require strongly consistent reads (or that cannot tolerate eventually consistent reads).
Applications that do not require microsecond response times for reads, or that do not need to offload repeated read activity from underlying
tables.
Applications that are write-intensive, or that do not perform much read activity.
Applications that are already using a different caching solution with DynamoDB, and are using their own client-side logic for working with that
caching solution.
upvoted 1 times
4 months ago
Selected Answer: D
The key here is "losing user requests": SQS messages will stay in the queue until they have been processed
upvoted 1 times
5 months ago
Selected Answer: D
D because SQS is the cheapest way. First 1,000,000 requests are free each month.
Question states: "The company provisioned as much DynamoDB throughput as its budget allows"
upvoted 3 times
6 months ago
Selected Answer: D
D is more likely to fix this problem as SQS queue has the ability to wait (buffer) for consumer to notify that the request or message has been
processed.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
To address the issue of lost user requests and improve the availability of the API, the solutions architect should use the Amazon Simple Queue
Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB. Option D (correct answer)
By using an SQS queue and Lambda, the solutions architect can decouple the API front end from the processing microservices and improve the
overall scalability and availability of the system. The SQS queue acts as a buffer, allowing the API front end to continue accepting user requests
even if the processing microservices are experiencing high workloads or are temporarily unavailable. The Lambda function can then retrieve
requests from the SQS queue and write them to DynamoDB, ensuring that all user requests are stored and processed. This approach allows the
company to scale the processing microservices independently from the API front end, ensuring that the API remains available to users even during
periods of high demand.
upvoted 4 times
6 months, 1 week ago
Selected Answer: B
I would go to B : https://aws.amazon.com/es/blogs/database/amazon-dynamodb-accelerator-dax-a-read-throughwrite-through-cache-for-
dynamodb/
upvoted 1 times
6 months, 1 week ago
D is correct answer
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.
upvoted 1 times
6 months, 1 week ago
Selected Answer: D
Option D is right answer
upvoted 1 times
6 months, 2 weeks ago
Why not B? DAX.
"When you’re developing against DAX, instead of pointing your application at the DynamoDB endpoint, you point it at the DAX endpoint, and DAX
handles the rest. As a read-through/write-through cache, DAX seamlessly intercepts the API calls that an application normally makes to DynamoDB
so that both read and write activity are reflected in the DAX cache."
https://aws.amazon.com/es/blogs/database/amazon-dynamodb-accelerator-dax-a-read-throughwrite-through-cache-for-dynamodb/
upvoted 1 times
2 months, 2 weeks ago
It is not DAX because of the company's budget restriction associated with the DynamoDB. This is a requirement in the question. DynamoDB
charges for DAX capacity by the hour and your DAX instances run with no long-term commitments. Please refer to:
https://aws.amazon.com/dynamodb/pricing/provisioned/#.E2.80.A2_DynamoDB_Accelerator_.28DAX.29
upvoted 2 times
6 months, 3 weeks ago
Yeah, I thought the answer was also DAX.
upvoted 1 times
7 months ago
Selected Answer: D
Using SQS should be the answer.
upvoted 3 times
6 months, 3 weeks ago
Why not DAX? Could somebody explain?
upvoted 1 times
6 months, 1 week ago
Using DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB, may improve the write performance of the system, but it
does not provide the same level of scalability and availability as using an SQS queue and Lambda.
Hence, Option B is incorrect.
upvoted 1 times
6 months, 3 weeks ago
key noted issue is "losing user requests" which is resolved with SQS
upvoted 5 times
6 months, 3 weeks ago
DAX helps in reducing the read load on DynamoDB; here we need a solution to handle write requests, which is well handled by SQS and
Lambda to buffer writes to DynamoDB.
upvoted 4 times
7 months ago
Selected Answer: D
Answer d
upvoted 2 times
7 months ago
Answer : D
upvoted 1 times
Topic 1
Question #208
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data
are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket.
Which solution will meet these requirements?
A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket
to only allow the EC2 instance’s IAM role for access.
B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security
groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance’s IAM role for access.
C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket’s service API endpoint. Create a route
in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the
EC2 instance’s IAM role for access.
D. Use the AWS provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket’s service API endpoint. Create a
route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow
the EC2 instance’s IAM role for access.
Correct Answer:
B
Highly Voted
7 months ago
Selected Answer: A
I think answer should be A and not B.
as we cannot attach security groups to a gateway endpoint.
upvoted 11 times
6 months, 1 week ago
It's possible:
https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
upvoted 2 times
2 months ago
No, it’s not
upvoted 2 times
1 week, 4 days ago
A gateway endpoint is used as a target in a route table; it does not use security groups.
upvoted 1 times
3 weeks ago
Create a security group that allows the resources in your VPC to communicate with the endpoint network interfaces for the VPC endpoint.
To ensure that tools such as the AWS CLI can make requests over HTTPS from resources in the VPC to the AWS service, the security group
must allow inbound HTTPS traffic.
For Security groups, select the security groups to associate with the endpoint network interfaces for the VPC endpoint. By default, we
associate the default security group for the VPC.
upvoted 1 times
Highly Voted
6 months, 1 week ago
Selected Answer: B
The correct solution to meet the requirements is Option B. A gateway VPC endpoint for Amazon S3 should be created in the Availability Zone
where the EC2 instance is located. This will allow the EC2 instance to access the S3 bucket directly, without routing through the public internet. The
endpoint should also be configured with appropriate security groups to allow access to the S3 bucket. Additionally, a resource policy should be
attached to the S3 bucket to only allow the EC2 instance's IAM role for access.
upvoted 9 times
6 months, 1 week ago
Option A is incorrect because an interface VPC endpoint for Amazon S3 would not provide a direct connection between the EC2 instance and
the S3 bucket.
Option C is incorrect because using the nslookup tool to obtain the private IP address of the S3 bucket's service API endpoint would not
provide a secure connection between the EC2 instance and the S3 bucket.
Option D is incorrect because using the ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint is not a
secure method to connect the EC2 instance to the S3 bucket.
upvoted 1 times
4 months, 1 week ago
There are two types VPC Endpoint:
Gateway endpoint
Interface endpoint
A Gateway endpoint:
1) Helps you to securely connect to Amazon S3 and DynamoDB
2) Endpoint serves as a target in your route table for traffic
3) Provide access to endpoint (endpoint, identity and resource policies)
An Interface endpoint:
1) Help you to securely connect to AWS services EXCEPT FOR Amazon S3 and DynamoDB
2) Powered by PrivateLink (keeps network traffic within AWS network)
3) Needs an elastic network interface (ENI) (entry point for traffic)
upvoted 11 times
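Whichever endpoint type is used, the second half of the correct answer — the bucket resource policy — looks roughly like this. A hedged sketch: the bucket name and VPC endpoint id below are hypothetical placeholders, and this variant restricts by source endpoint (a common companion to restricting by the instance's IAM role, which would use an `aws:userId` condition instead).

```python
import json

def vpce_only_bucket_policy(bucket: str, vpce_id: str) -> str:
    """Build an S3 bucket policy that denies all access unless the request
    arrives through the given VPC endpoint."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnlessThroughVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Deny anything that did not come via the private endpoint
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }],
    }
    return json.dumps(policy)
```

Combined with the endpoint itself, this keeps both the API calls and the data off public internet routes while limiting uploads to the intended instance.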
5 months, 3 weeks ago
An interface VPC endpoint does provide a direct connection between the EC2 instance and the S3 bucket. It enables private communication
between instances in your VPC and resources in other services without requiring an internet gateway, a NAT device, or a VPN connection.
Option A , which recommends creating an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located and
attaching a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access, is the correct solution for the given
scenario. It meets the requirement to ensure that no API calls and no data are routed through public internet routes and that only the EC2
instance can have access to upload data to the S3 bucket.
upvoted 2 times
4 months, 3 weeks ago
In support, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-
for-s3
upvoted 3 times
Most Recent
1 day, 8 hours ago
Selected Answer: A
By creating an interface VPC endpoint for Amazon S3 in the same subnet as the EC2 instance, the data transfer between the EC2 instance and S3
can occur privately within the Amazon network, without traversing the public internet. This ensures secure and direct communication between the
EC2 instance and S3. Attaching a resource policy to the S3 bucket that allows access only from the IAM role associated with the EC2 instance
further restricts access to only the authorized instance.
B. Creating a gateway VPC endpoint for Amazon S3 would still involve routing through the public internet, which is not desired in this case.
C. Running nslookup or creating a specific route in the VPC route table does not provide the desired level of security and privacy, as the traffic may
still traverse public internet routes.
D. Using the publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint is not a recommended
approach, as IP addresses can change over time, and it does not provide the same level of security as using VPC endpoints.
upvoted 1 times
2 weeks, 1 day ago
Selected Answer: A
A security group cannot be associated with a gateway endpoint, so the answer is A.
upvoted 1 times
3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html
Create a security group that allows the resources in your VPC to communicate with the endpoint network interfaces for the VPC endpoint. To
ensure that tools such as the AWS CLI can make requests over HTTPS from resources in the VPC to the AWS service, the security group must allow
inbound HTTPS traffic.
For Security groups, select the security groups to associate with the endpoint network interfaces for the VPC endpoint. By default, we associate the
default security group for the VPC.
upvoted 2 times
3 weeks, 6 days ago
Selected Answer: B
You cannot use an interface endpoint for S3 or DynamoDB. It has to be a gateway endpoint. See the diagram:
https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html
upvoted 2 times
1 month ago
It's absolutely A; a security group can't be attached to a gateway VPC endpoint
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: A
ChatGPT:
Option B is not the best solution because it involves creating a gateway VPC endpoint for Amazon S3. Gateway VPC endpoints only support
Amazon S3 and DynamoDB and do not support private DNS. This means that requests to the S3 bucket would still be routed over the internet. On
the other hand, an interface VPC endpoint, as described in option A, supports private DNS and allows traffic between the VPC and the service to
remain within the Amazon network. This ensures that no API calls and no data are routed through public internet routes.
upvoted 2 times
1 month ago
that answer is completely wrong!!!! Don't rely on ChatGPT, use the documentation available and think yourself. Reference:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html (look into 'Considerations' section)
upvoted 1 times
2 months, 3 weeks ago
Selected Answer: B
Answer: B
What is VPC gateway endpoint
Consider a scenario where you have to access S3 from your EC2 instance in a public subnet. As the subnet has an internet gateway attached, the
traffic to S3 will go through the public internet. However, the problem arises if your instance is in a private subnet and does not have any NAT
gateway/instance attached or you cannot afford charges of NAT gateway. Currently, AWS S3 and DynamoDB are the only services supported by
gateway endpoints. Using Gateway endpoints does not incur any data processing or hourly charges.
https://digitalcloud.training/vpc-interface-endpoint-vs-gateway-endpoint-in-aws/
upvoted 2 times
4 months ago
Selected Answer: A
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3. Gateway endpoints
use public s3 ip addresses
upvoted 2 times
4 months, 3 weeks ago
Answer A is correct. You cannot attach a security group to a gateway endpoint. Note that gateway endpoints do not create an ENI in your subnet, hence
no security group can be attached. You can create an IAM policy to allow only the IAM role access.
(https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/)
upvoted 2 times
4 months, 3 weeks ago
Selected Answer: A
A - Because we cannot configure a SG on a gateway endpoint
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
An interface endpoint uses private IP addresses from the VPC to access S3. Interface endpoints use AWS PrivateLink.
upvoted 1 times
5 months, 3 weeks ago
Selected Answer: A
The correct answer is A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy
to the S3 bucket to only allow the EC2 instance’s IAM role for access.
A VPC endpoint allows you to create a private connection between your VPC and another service without requiring access over the internet, a NAT
device, or a VPN connection. An interface VPC endpoint is a network interface that you can create in your VPC that serves as an entry point for
incoming traffic. You can use an interface VPC endpoint to access resources in the service, such as an Amazon S3 bucket.
upvoted 1 times
5 months, 3 weeks ago
Attaching a resource policy to the S3 bucket allows you to specify which IAM entities are allowed to access the bucket and what actions they
can perform on the bucket and its contents. In this case, you can specify that only the EC2 instance’s IAM role has access to the bucket.
Option B is incorrect because a gateway VPC endpoint is used to access resources outside of the VPC, such as an on-premises data center. It is
not used to access resources within the VPC.
Option C is incorrect because the nslookup tool is used to find the IP address associated with a domain name. It is not used to obtain the
private IP address of the S3 bucket’s service API endpoint.
Option D is incorrect because the ip-ranges.json file contains the IP address ranges for all AWS services. It does not contain the private IP
address of the S3 bucket’s service API endpoint. Additionally, using a publicly available IP address range to create a route in the VPC route table
would not meet the requirement to ensure that no data is routed through public internet routes.
upvoted 1 times
4 months, 3 weeks ago
You can access Amazon S3 from your VPC using gateway VPC endpoints. After you create the gateway endpoint, you can add it as a target in
your route table for traffic destined from your VPC to Amazon S3.
The reasoning for B is absolutely wrong
upvoted 1 times
5 months, 2 weeks ago
Even an interface VPC endpoint can be used to access services such as S3 or SNS outside of the VPC. The reasoning in Option B is not correct.
upvoted 1 times
6 months ago
Selected Answer: A
From what I understand, you can create security groups for interface endpoints because they use an ENI, but you cannot create security groups for
gateway endpoints as they do not use ENIs. So I would go with A
upvoted 3 times
6 months, 1 week ago
B is wrong as it is not created in just an AZ, but specifically in a VPC
upvoted 1 times
6 months, 1 week ago
Selected Answer: A
Both (gateway and interface) VPC endpoints allow access to S3 privately over the AWS network.
A gateway endpoint is usually preferred when private access to S3 is needed from EC2 in a VPC, because it is free of charge, easy to set up, and scalable.
To set up access via a gateway VPC endpoint properly, you need to edit the route tables, but that isn't mentioned in the answer choice, so without it the
connection will not work.
So by elimination we may select A as the correct answer.
upvoted 3 times
5 months, 2 weeks ago
Similarly, to enable an interface VPC endpoint, a security group must be attached, which is not mentioned in Option A. Actually, both interface
and gateway VPC endpoints can access AWS services outside of the VPC.
upvoted 1 times
Topic 1
Question #209
A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2
On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently
throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session
data management. The company is willing to make changes to code if needed.
What should the solutions architect do to ensure that the architecture supports distributed session data management?
A. Use Amazon ElastiCache to manage and store session data.
B. Use session affinity (sticky sessions) of the ALB to manage session data.
C. Use Session Manager from AWS Systems Manager to manage the session.
D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
Correct Answer:
A
Highly Voted
6 months, 1 week ago
Selected Answer: A
The correct answer is A. Use Amazon ElastiCache to manage and store session data.
In order to support distributed session data management in this scenario, it is necessary to use a distributed data store such as Amazon
ElastiCache. This will allow the session data to be stored and accessed by multiple EC2 instances across multiple Availability Zones, which is
necessary for a scalable and highly available architecture.
Option B, using session affinity (sticky sessions) of the ALB, would not be sufficient because this would only allow the session data to be stored on a
single EC2 instance, which would not be able to scale across multiple Availability Zones.
Options C and D, using Session Manager and the GetSessionToken API operation in AWS STS, are not related to session data management and
would not be appropriate solutions for this scenario.
upvoted 16 times
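The distributed session-store pattern described above can be sketched as follows. This is an illustrative sketch: a plain dict stands in for the ElastiCache (Redis) cluster so the example is self-contained; with a real Redis client the create/get calls would map to `setex`/`get` against the cluster endpoint.

```python
import json
import time
import uuid

class SessionStore:
    """Minimal session store with the get/set-with-TTL shape an
    ElastiCache-backed store would have."""

    def __init__(self, ttl_seconds: int = 1800):
        self.ttl = ttl_seconds
        self._store = {}  # session_id -> (expires_at, serialized data)

    def create(self, data: dict) -> str:
        session_id = uuid.uuid4().hex
        self._store[session_id] = (time.time() + self.ttl, json.dumps(data))
        return session_id

    def get(self, session_id: str):
        entry = self._store.get(session_id)
        if entry is None or entry[0] < time.time():
            return None  # missing or expired
        return json.loads(entry[1])

# Because the store lives outside the EC2 instances, any instance behind the
# ALB can look up a session by id — requests do not need to stick to the
# instance that created the session, so scale-in events lose nothing.
```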
Most Recent
1 day, 6 hours ago
ElastiCache is a managed in-memory data store service that is well-suited for managing session data in a distributed architecture. It provides high-
performance, scalable, and durable storage for session data, allowing multiple EC2 instances to access and share session data seamlessly. By using
ElastiCache, the application can offload the session management workload from the EC2 instances and leverage the distributed caching capabilities
of ElastiCache for improved scalability and performance.
Option B, using session affinity (sticky sessions) of the ALB, is not the best choice for distributed session data management because it ties each
session to a specific EC2 instance. As the instances scale up and down frequently, it can lead to uneven load distribution and may not provide
optimal scalability.
Options C and D are not applicable for managing session data. AWS Systems Manager's Session Manager is primarily used for secure remote shell
access to EC2 instances, and the AWS STS GetSessionToken API operation is used for temporary security credentials and not session data
management.
upvoted 1 times
1 week, 2 days ago
Selected Answer: A
A. Use Amazon ElastiCache to manage and store session data.
- Correct. - Session data is managed at the application-layer, and a distributed cache should be used
B. Use session affinity (sticky sessions) of the ALB to manage session data.
- Wrong. This tightly couples the individual EC2 instances to the session data, and requires additional logic in the ALB. When scale-in happens, the
session data stored on individual EC2 instances is destroyed
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
correct answer is A as instance are getting up and down.
upvoted 1 times
6 months, 1 week ago
Hey, where is question 210..?
upvoted 1 times
4 months ago
https://www.examtopics.com/discussions/amazon/view/94992-exam-aws-certified-solutions-architect-associate-saa-c03/
Here, man
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
Amazon ElastiCache to manage and store session data.
upvoted 1 times
6 months, 2 weeks ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/46412-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
6 months, 2 weeks ago
A
Amazon ElastiCache to manage and store session data. This solution will allow the application to automatically scale across multiple Availability
Zones without losing session data, as the session data will be stored in a cache that is accessible from any EC2 instance. Additionally, using Amazon
ElastiCache will enable the company to easily manage and scale the cache as needed, without requiring any changes to the application code.
Option C is not correct because,Session Manager from AWS Systems Manager will not provide the necessary support for distributed session data
management. Session Manager is a tool for managing and tracking sessions on EC2 instances, but it does not provide a mechanism for storing and
managing session data in a distributed environment.
upvoted 3 times
7 months ago
better justification found here...
https://www.examtopics.com/discussions/amazon/view/46412-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
7 months ago
why not C?
upvoted 1 times
7 months ago
Selected Answer: A
ALB sticky sessions can keep requests going to the same backend instance. But it says "distributed session management" and the company "is willing to
make changes to code", so I think A is better
upvoted 3 times
7 months ago
Selected Answer: A
Answer : A
upvoted 1 times
Topic 1
Question #210
A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing
scaling problems during peak traffic hours. The current architecture includes the following:
• A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application
• Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders
The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.
A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic
hours. The solution must optimize utilization of the company’s AWS resources.
Which solution meets these requirements?
A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group’s
minimum capacity according to peak workload values.
B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke
an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.
C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the
EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send.
D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the
EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups
based on this metric.
Correct Answer:
C
16 hours, 52 minutes ago
SQS auto-scales by default so I don't think we need to mention it explicitly. Option D should be correct.
upvoted 1 times
18 hours, 16 minutes ago
Selected Answer: D
A. This approach focuses solely on CPU utilization, which may not accurately reflect the scaling needs of the order collection and fulfillment
processes. It does not address the need for decoupling and reliable message processing.
B. While this approach incorporates alarms to trigger additional Auto Scaling groups, it lacks the decoupling and reliable message processing
provided by using SQS queues. It may lead to inefficient scaling and potential data loss.
C. Although using SQS queues is a step in the right direction, scaling solely based on queue notifications may not provide optimal resource
utilization. It does not consider the backlog per instance and does not allow for fine-grained control over scaling.
Overall, option D, which involves using SQS queues for order collection and fulfillment, creating a metric based on backlog per instance calculation,
and scaling the Auto Scaling groups accordingly, is the most suitable solution to address the scaling problems while optimizing resource utilization
and ensuring reliable message processing.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
C is incorrect. "Based on notifications that the queues send": SQS does not send notifications; consumers must poll the queue.
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: C
D is not correct because it requires more operational overhead and complexity than option C which is simpler and more cost-effective. It uses the
existing queue metrics that are provided by Amazon SQS and does not require creating or publishing any custom metrics. You can use target
tracking scaling policies to automatically maintain a desired backlog per instance ratio without having to calculate or monitor it yourself.
upvoted 2 times
Community vote distribution: D (76%), C (24%)
3 months, 3 weeks ago
Selected Answer: D
When the backlog per instance reaches the target value, a scale-out event will happen. Because the backlog per instance is already 150 messages
(1500 messages / 10 instances), your group scales out, and it scales out by five instances to maintain proportion to the target value.
Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the
length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet's running capacity, which for
an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
upvoted 4 times
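The backlog-per-instance arithmetic quoted from the AWS docs above can be sketched in a few lines. The numbers mirror the hypothetical example in the thread (1,500 messages, 10 instances, a target of 100 messages per instance); a real deployment would read the ApproximateNumberOfMessages queue attribute from SQS and publish the ratio as a custom CloudWatch metric for a target tracking policy.

```python
import math

# Backlog-per-instance sketch. A real setup reads the
# ApproximateNumberOfMessages queue attribute from SQS and publishes
# this ratio as a custom CloudWatch metric; the numbers here are the
# hypothetical ones from the discussion above.

def backlog_per_instance(queue_length: int, in_service_instances: int) -> float:
    """Queue depth divided by the fleet's running capacity."""
    return queue_length / in_service_instances

def instances_needed(queue_length: int, target_backlog: float) -> int:
    """Smallest fleet size that keeps the backlog at or below the target."""
    return math.ceil(queue_length / target_backlog)

current = backlog_per_instance(1500, 10)   # 150 messages per instance
needed = instances_needed(1500, 100)       # 15 instances, i.e. scale out by 5
```

With a target of 100 and a backlog of 150, the group grows from 10 to 15 instances, matching the "scales out by five instances" example quoted from the documentation.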
5 months ago
Selected Answer: D
Scale based on queue length
upvoted 2 times
5 months, 1 week ago
answer is D.
read question again
upvoted 2 times
5 months, 1 week ago
Selected Answer: D
The number of instances in your Auto Scaling group can be driven by how long it takes to process a message and the acceptable amount of
latency (queue delay).
The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
D is correct
upvoted 1 times
5 months, 1 week ago
C
Need to auto-scale the SQS queue
upvoted 1 times
5 months ago
Why would you scale based on "Scale the Auto Scaling groups based on notifications that the queues send"? Would it not make 1000 times
more sense to scale based on queue length: "Create a metric based on a backlog per instance calculation"?
upvoted 3 times
5 months, 1 week ago
Selected Answer: D
I think its D as here we are creating new metric to calculate load on each EC2 instance.
upvoted 2 times
5 months, 1 week ago
I think its D as here we are creating new metric to calculate load on each EC2 instance.
upvoted 2 times
5 months, 1 week ago
Selected Answer: D
C is incorrect as SQS doesn't send notifications and needs to be polled by the consumers
upvoted 2 times
5 months, 2 weeks ago
I think, D
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
I think C, but I'm not sure; I think both would solve the problem.
upvoted 1 times
5 months ago
No they don't. How exactly would you scale based on a queue sending a message? Scale up when it sends a message? Scale up every time it
sends a message? This takes no account of how quickly messages are processed.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
I think C
Topic 1
Question #211
A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS,
Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company
resources are tagged with a tag name of “application” and a value that corresponds to each application. A solutions architect must provide the
quickest solution for identifying all of the tagged components.
Which solution meets these requirements?
A. Use AWS CloudTrail to generate a list of resources with the application tag.
B. Use the AWS CLI to query each service across all Regions to report the tagged components.
C. Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.
D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.
Correct Answer:
D
15 hours, 35 minutes ago
Selected Answer: D
A is not the quickest solution because CloudTrail primarily focuses on capturing and logging API activity. While it can provide information about
resource changes, it may not provide a comprehensive and quick way to identify all the tagged components across multiple services and Regions.
B involves manually querying each service using the AWS CLI, which can be time-consuming and cumbersome, especially when dealing with
multiple services and Regions. It is not the most efficient solution for quickly identifying tagged components.
C is focused on analyzing logs rather than directly identifying the tagged components. While CloudWatch Logs Insights can help extract
information from logs, it may not provide a straightforward and quick way to gather a consolidated list of all tagged components across different
services and Regions.
D is the quickest solution as it leverages the Resource Groups Tag Editor, which is specifically designed for managing and organizing resources
based on tags. It offers a centralized and efficient approach to generate a report of tagged components across multiple services and Regions.
upvoted 1 times
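As a local sketch of the filtering the Tag Editor performs (the resource list and ARNs below are invented for illustration; in practice you would call the Resource Groups Tagging API, e.g. `aws resourcegroupstaggingapi get-resources --tag-filters Key=application,Values=orders`):

```python
# Toy model of tag-based resource discovery. The ARNs and tag values are
# made up; the real global lookup is done by the Resource Groups Tagging
# API / Tag Editor, not by client-side filtering like this.

resources = [
    {"arn": "arn:aws:ec2:us-east-1:111122223333:instance/i-0abc",
     "tags": {"application": "orders"}},
    {"arn": "arn:aws:lambda:eu-west-1:111122223333:function:billing",
     "tags": {"application": "billing"}},
    {"arn": "arn:aws:sqs:us-east-1:111122223333:orders-queue",
     "tags": {"application": "orders", "env": "prod"}},
]

def by_tag(resources, key, value):
    """Return ARNs of resources whose tags include key=value."""
    return [r["arn"] for r in resources if r["tags"].get(key) == value]

orders = by_tag(resources, "application", "orders")  # matches two ARNs
```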
1 month ago
Selected Answer: D
A solutions architect can provide the quickest solution for identifying all of the tagged components by running a query with the AWS
Resource Groups Tag Editor to report on the resources globally with the application tag, hence option D is the right answer.
upvoted 2 times
3 months, 2 weeks ago
Selected Answer: D
The answer is D
upvoted 2 times
5 months ago
Selected Answer: D
D is correct.
upvoted 2 times
5 months ago
Selected Answer: D
https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html
upvoted 2 times
5 months, 1 week ago
Answer is D.
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
validated
https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html
upvoted 1 times
Community vote distribution: D (100%)
5 months, 2 weeks ago
Selected Answer: D
D is correct
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/51352-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Topic 1
Question #212
A company needs to export its database once a day to Amazon S3 for other teams to access. The exported object size varies between 2 GB and 5
GB. The S3 access pattern for the data is variable and changes rapidly. The data must be immediately available and must remain accessible for up
to 3 months. The company needs the most cost-effective solution that will not increase retrieval time.
Which S3 storage class should the company use to meet these requirements?
A. S3 Intelligent-Tiering
B. S3 Glacier Instant Retrieval
C. S3 Standard
D. S3 Standard-Infrequent Access (S3 Standard-IA)
Correct Answer:
A
Highly Voted
5 months, 1 week ago
Selected Answer: A
S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent Access
tier and after 90 days of no access to the Archive Instant Access tier.
upvoted 9 times
1 month, 3 weeks ago
https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/
upvoted 1 times
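The tiering behavior described in the top comment can be modeled as a tiny decision function (thresholds per the AWS docs: 30 days of no access moves an object to Infrequent Access, 90 days to Archive Instant Access; the function itself is illustrative only, since S3 performs these transitions automatically, not user code):

```python
# Illustrative model of S3 Intelligent-Tiering's automatic transitions.
# S3 moves the objects itself; this function only encodes the thresholds.

def intelligent_tier(days_since_last_access: int) -> str:
    if days_since_last_access >= 90:
        return "Archive Instant Access"
    if days_since_last_access >= 30:
        return "Infrequent Access"
    return "Frequent Access"
```

When an object is read again it moves back to the Frequent Access tier with no retrieval fee, which is why this class suits access patterns that are variable and change rapidly.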
Most Recent
15 hours, 33 minutes ago
Selected Answer: A
Option A is designed for objects with changing access patterns, but it may not be the most cost-effective solution for long-term storage of the
data, especially if the access pattern is variable and changes rapidly.
Option B is optimized for long-term archival storage and may not provide the immediate accessibility required by the company. Retrieving data
from Glacier storage typically incurs a longer retrieval time compared to other storage classes.
Option C is the appropriate choice for immediate availability and quick access to the data. It offers high durability, availability, and low latency
access, making it suitable for the company's needs. However, it is not the most cost-effective option for long-term storage.
Option D is a more cost-effective storage class compared to S3 Standard, especially for data that is accessed less frequently. However, since the
access pattern for the data is variable and changes rapidly, S3 Standard-IA may not be the most cost-effective solution, as it incurs additional
retrieval fees for frequent access.
upvoted 1 times
1 week, 4 days ago
Answer A: S3 Intelligent-Tiering is the recommended storage class for data with unknown, changing, or unpredictable access patterns, independent
of object size or retention period, such as data lakes, data analytics, and new applications.
upvoted 1 times
3 weeks, 1 day ago
The question specifically says the data should be immediately available, so D can't be true, as S3 Standard-IA is for data that is not accessed
frequently. Don't forget: up to 3 months.
upvoted 2 times
1 month ago
Selected Answer: A
Amazon S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns change,
without performance impact or operational overhead
upvoted 1 times
2 months, 2 weeks ago
Selected Answer: D
I think D and ChatGPT says D as well
upvoted 1 times
22 hours, 29 minutes ago
ChatGPT isn't perfect yet. It is often wrong when it comes to these problems.
upvoted 1 times
Community vote distribution: A (78%), D (22%)
2 weeks, 5 days ago
ChatGpt is cheeks, eff that
upvoted 1 times
1 month, 2 weeks ago
ChatGPT is not always correct. Use your intelligence to answer questions
upvoted 2 times
3 months, 1 week ago
Definitely A
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: D
D is the correct answer for this use case
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: D
Response D, not A
S3 Intelligent-Tiering is a cost-optimized storage class that automatically moves data to the most cost-effective access tier based on changing
access patterns. Although it offers cost savings, it also introduces additional latency and retrieval time into the data retrieval process, which may
not meet the requirement of "immediately available" data.
On the other hand, S3 Standard-Infrequent Access (S3 Standard-IA) provides low cost storage with low latency and high throughput performance.
It is designed for infrequently accessed data that can be recreated if lost, and can be retrieved in a timely manner if required. It is a cost-effective
solution that meets the requirement of immediately available data and remains accessible for up to 3 months.
upvoted 1 times
5 months, 1 week ago
Changes rapidly and immediately available, so the answer is A.
upvoted 4 times
5 months, 1 week ago
Selected Answer: A
A looks correct
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: A
"The S3 access pattern for the data is variable and changes rapidly" => use S3 Intelligent-Tiering, which is smart enough to transition objects to the
appropriate storage class.
upvoted 4 times
5 months, 2 weeks ago
Selected Answer: D
D. S3 Standard-Infrequent Access (S3 Standard-IA)
S3 Standard-IA is the most cost-effective storage class that meets the company's requirements. It provides immediate access to the data, and the
data remains accessible for up to 3 months. S3 Standard-IA is optimized for infrequently accessed data, which is suitable for the company's use
case of exporting the database once a day. This storage class also has a lower retrieval fee compared to S3 Glacier, which is important for the
company as the S3 access pattern for the data is variable and changes rapidly. S3 Intelligent-Tiering and S3 Standard are not the best choice in this
case because they are designed for frequently accessed data and have higher retrieval fees
upvoted 2 times
5 months, 1 week ago
The correct answer is A.
The S3 access pattern for the data is variable and changes rapidly.
upvoted 5 times
Topic 1
Question #213
A company is developing a new mobile app. The company must implement proper traffic filtering to protect its Application Load Balancer (ALB)
against common application-level attacks, such as cross-site scripting or SQL injection. The company has minimal infrastructure and operational
staff. The company needs to reduce its share of the responsibility in managing, updating, and securing servers for its AWS environment.
What should a solutions architect recommend to meet these requirements?
A. Configure AWS WAF rules and associate them with the ALB.
B. Deploy the application using Amazon S3 with public hosting enabled.
C. Deploy AWS Shield Advanced and add the ALB as a protected resource.
D. Create a new ALB that directs traffic to an Amazon EC2 instance running a third-party firewall, which then passes the traffic to the current
ALB.
Correct Answer:
A
Highly Voted
5 months, 1 week ago
Selected Answer: C
C --- Read and understand the question. *The company needs to reduce its share of responsibility in managing, updating, and securing servers for
its AWS environment* Go with AWS Shield advanced --This is a managed service that includes AWS WAF, custom mitigations, and DDoS insight.
upvoted 11 times
2 months, 1 week ago
Brother answer is A, Read the question once again or ask CHATGPT for more in-depth analysis
upvoted 1 times
4 months ago
You stated, "This is a managed service that includes AWS WAF, custom mitigations, and DDoS insight." and you are correct. However, the service
you would actually have to setup to prevent SQL injection attacks is WAF.
upvoted 4 times
2 months, 1 week ago
exactly, that's like saying let's implement Network Firewall Manager to manage WAF, absurd!
upvoted 2 times
Most Recent
15 hours, 26 minutes ago
Selected Answer: A
By configuring AWS WAF rules and associating them with the ALB, the company can filter and block malicious traffic before it reaches the
application. AWS WAF offers pre-configured rule sets and allows custom rule creation to protect against common vulnerabilities like XSS and SQL
injection.
Option B does not provide the necessary security and traffic filtering capabilities to protect against application-level attacks. It is more suitable for
hosting static content rather than implementing security measures.
Option C is focused on DDoS protection rather than application-level attacks like XSS or SQL injection. While AWS Shield Advanced does not
address the specific requirements mentioned in the scenario.
Option D involves maintaining and securing additional infrastructure, which goes against the requirement of reducing responsibility and relying on
minimal operational staff.
upvoted 1 times
2 weeks, 5 days ago
Selected Answer: C
With Shield Advanced you get centralized protection management; this allows you to use AWS Firewall Manager (included in AWS Shield Advanced)
with policies that automatically apply WAF to resources. Massive sales pitch, see the link: https://aws.amazon.com/shield/features/
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
Shield is not aimed to handle SQL injection.
upvoted 1 times
Community vote distribution: A (65%), C (35%)
1 month, 2 weeks ago
Selected Answer: A
WAF = cross-site scripting or SQL injection
Shield/Shield Advanced = DDoS
upvoted 2 times
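To make the WAF-versus-Shield distinction concrete, here is a toy illustration of the kind of request inspection that WAF's managed rules automate for XSS and SQL injection. The regexes are deliberately naive examples written for this sketch, not real WAF rules; actual detection is far more sophisticated.

```python
import re

# Toy request inspection in the spirit of WAF's SQLi/XSS managed rules.
# These two patterns are illustrative only and would be trivial to evade;
# they exist to show what "application-level filtering" means, not to be used.

SQLI = re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE)
XSS = re.compile(r"<\s*script", re.IGNORECASE)

def looks_malicious(query_string: str) -> bool:
    """Return True if the query string matches a known-bad pattern."""
    return bool(SQLI.search(query_string) or XSS.search(query_string))

looks_malicious("id=1' OR 1=1")                # True  (SQL injection shape)
looks_malicious("<script>alert(1)</script>")   # True  (XSS shape)
looks_malicious("id=42&sort=asc")              # False (benign request)
```

Shield and Shield Advanced operate at a different layer entirely (volumetric L3/L4 DDoS mitigation), which is why they do not address this question's requirement.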
1 month, 3 weeks ago
Selected Answer: A
Even with AWS Shield Advanced, you would still need to configure AWS WAF (only it's costing is included with Shield Adv.) rules to protect against
common application-level attacks such as cross-site scripting or SQL injection.
Since there is no mention of protection against DDoS attacks, C is more costly and not useful.
upvoted 2 times
2 months ago
Selected Answer: A
WAF == application-level attacks, such as cross-site scripting or SQL injection
A
upvoted 2 times
2 months, 1 week ago
Selected Answer: A
Answer is A; WAF will protect the infra from XSS-type injections,
while Shield will be used to protect infra from DDoS attacks.
Don't get confused.
The only trick to getting the right answer for the question is to
read the question multiple times, even when you are very confident about the answer you chose on the first attempt.
upvoted 4 times
2 months, 4 weeks ago
Answer is A
upvoted 1 times
3 months ago
Selected Answer: A
AWS WAF protects against SQL injection.
upvoted 1 times
3 months ago
CCCCCCCCCCCCCCCCCCCCC
upvoted 1 times
3 months ago
Selected Answer: A
Look at this https://repost.aws/knowledge-center/waf-rule-prevent-sqli-xss
upvoted 1 times
3 months, 1 week ago
Using AWS WAF has several benefits:
....
Presence of SQL code that is likely to be malicious (known as SQL injection).
Presence of a script that is likely to be malicious (known as cross-site scripting).
upvoted 1 times
3 months, 1 week ago
A...AWS WAF is a managed service that allows companies to protect their web applications from web exploits that might affect their applications,
including SQL injection and cross-site scripting. It provides an easy-to-use interface to configure, monitor, and manage web access control for
applications running on AWS. AWS WAF works with Amazon CloudFront and Application Load Balancer, making it easy to deploy security policies
for your web applications.
upvoted 1 times
3 months, 1 week ago
AAAAAAA. WAF works with CloudFront, Application Load Balancer, API Gateway, and AWS AppSync.
upvoted 1 times
3 months, 2 weeks ago
reading up on AWS Shield Advanced, and I don't see anything regarding them help with managing or updating servers. Yes WAF integrates with SA
for free but when all you need is WAF, and IF SA does not help with reducing your server management, why pay for SA... it is very expensive.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: A
"The company must implement proper traffic filtering to protect its Application Load Balancer (ALB) against common application-level attacks, such
as cross-site scripting or SQL injection." --- WAF in front of the Application Load Balancer or CloudFront will either allow this content to be received
or return an HTTP 403 status code. Also, WAF protects Layer 7 (the Application Layer).
AWS Shield Advanced, meanwhile, provides enhanced protections for applications running on Elastic Load Balancer, CloudFront, and Route 53 against
DDoS attacks. Shield protects Layers 3 and 4, which are not the Application Layer. And most of all, Shield Advanced is expensive: it costs
$3,000 USD per month.
So, the answer should be A -- AWS WAF.
upvoted 3 times
Topic 1
Question #214
A company’s reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache
Parquet format and must store the files in a transformed data bucket.
Which solution will meet these requirements with the LEAST development effort?
A. Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the data. Use EMR File System (EMRFS)
to write files to the transformed data bucket.
B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data. Specify
the transformed data bucket in the output step.
C. Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the transformed data bucket. Use
the job definition to submit a job. Specify an array job as the job type.
D. Create an AWS Lambda function to transform the data and output the data to the transformed data bucket. Configure an event notification
for the S3 bucket. Specify the Lambda function as the destination for the event notification.
Correct Answer:
D
Highly Voted
5 months, 2 weeks ago
Selected Answer: B
It looks like AWS Glue allows fully managed CSV to Parquet conversion jobs: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
upvoted 9 times
Most Recent
15 hours, 22 minutes ago
Selected Answer: B
AWS Glue is a fully managed ETL service that simplifies the process of preparing and transforming data for analytics. Using AWS Glue requires
minimal development effort compared to the other options.
Option A requires more development effort as it involves writing a Spark application to transform the data. It also introduces additional
infrastructure management with the EMR cluster.
Option C requires writing and managing custom Bash scripts for data transformation. It requires more manual effort and does not provide the
built-in capabilities of AWS Glue for data transformation.
Option D requires developing and managing a custom Lambda for data transformation. While Lambda can handle the transformation, it requires
more effort compared to AWS Glue, which is specifically designed for ETL operations.
Therefore, option B provides the easiest and least development effort by leveraging AWS Glue's capabilities for data discovery, transformation, and
output to the transformed data bucket.
upvoted 1 times
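A real Glue job would do the transform with PySpark DynamicFrames and write Parquet to the output bucket. As a rough local stand-in for the read-transform-write flow (the sample data and the cast applied are invented for illustration):

```python
import csv
import io

# Local stand-in for the Glue ETL flow: read CSV records, apply a transform,
# and hand rows to a writer. A real Glue job would end with something like
#   glueContext.write_dynamic_frame.from_options(
#       frame=df, connection_type="s3",
#       connection_options={"path": "s3://transformed-data-bucket/"},
#       format="parquet")
# (bucket name hypothetical); here we only model the transform step.

raw = "city,temp_c\nOslo,4\nCairo,31\n"

def transform(csv_text):
    """Parse CSV and cast numeric fields -- the 'T' in ETL."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for r in rows:
        r["temp_c"] = float(r["temp_c"])
    return rows

records = transform(raw)
```

The point of option B is that Glue manages all of this (schema discovery via the crawler, the Spark runtime, and the Parquet writer) with no servers to operate.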
1 week, 4 days ago
Least development effort means lambda. Glue also works but more overhead and cost. A simple lambda like this
https://github.com/ayshaysha/aws-csv-to-parquet-converter/blob/main/csv-parquet-converter.py
can be used to convert as soon as you see files in s3 bucket.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
upvoted 1 times
5 months ago
Selected Answer: B
S3 provides a single control to automatically encrypt all new objects in a bucket with SSE-S3 or SSE-KMS. Unfortunately, these controls only affect
new objects. If your bucket already contains millions of unencrypted objects, then turning on automatic encryption does not make your bucket
secure as the unencrypted objects remain.
For S3 buckets with a large number of objects (millions to billions), use Amazon S3 Inventory to get a list of the unencrypted objects, and Amazon
S3 Batch Operations to encrypt the large number of old, unencrypted files.
upvoted 2 times
Community vote distribution: B (100%)
5 months ago
Versioning:
When you overwrite an S3 object, it results in a new object version in the bucket. However, this will not remove the old unencrypted versions of
the object. If you do not delete the old version of your newly encrypted objects, you will be charged for the storage of both versions of the
objects.
S3 Lifecycle
If you want to remove these unencrypted versions, use S3 Lifecycle to expire previous versions of objects. When you add a Lifecycle
configuration to a bucket, the configuration rules apply to both existing objects and objects added later. C is missing this step, which I believe is
what makes B the better choice. B includes the functionality of encrypting the old unencrypted objects via Batch Operations, whereas,
Versioning does not address the old unencrypted objects.
upvoted 1 times
5 months ago
Please delete this. I was meaning to place this response on a different question.
upvoted 1 times
5 months, 1 week ago
ETL = Glue
upvoted 3 times
5 months, 1 week ago
Selected Answer: B
B is the correct answer
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
AWS Glue Crawler is for ETL
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
The correct answer is B
upvoted 1 times
5 months, 2 weeks ago
B is the answer
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: B
It should be B.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
According to the documentation, the right answer is B.
https://docs.aws.amazon.com/pt_br/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
upvoted 1 times
5 months, 2 weeks ago
B is the ans
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
Answer is B
upvoted 1 times
5 months, 2 weeks ago
Option B sounds more plausible to me.
upvoted 1 times
Topic 1
Question #215
A company has 700 TB of backup data stored in network attached storage (NAS) in its data center. This backup data needs to be accessible for
infrequent regulatory requests and must be retained for 7 years. The company has decided to migrate this backup data from its data center to AWS.
The migration must be complete within 1 month. The company has 500 Mbps of dedicated bandwidth on its public internet connection available
for data transfer.
What should a solutions architect do to migrate and store the data at the LOWEST cost?
A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3
Glacier.
C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to
Amazon S3 Glacier Deep Archive.
D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises
NAS storage to Amazon S3 Glacier.
Correct Answer:
A
15 hours, 13 minutes ago
Selected Answer: A
By ordering Snowball devices, the company can transfer the 700 TB of backup data from its data center to AWS. Once the data is transferred to S3,
a lifecycle policy can be applied to automatically transition the files from the S3 Standard storage class to the cost-effective Amazon S3 Glacier
Deep Archive storage class.
Option B would require continuous data transfer over the public internet, which could be time-consuming and costly given the large amount of
data. It may also require significant bandwidth allocation.
Option C would involve additional costs for provisioning and maintaining the dedicated connection, which may not be necessary for a one-time
data migration.
Option D could be a viable option, but it may incur additional costs for deploying and managing the DataSync agent.
Therefore, option A is the recommended choice as it provides a secure and efficient data transfer method using Snowball devices and allows for
cost optimization through lifecycle policies by transitioning the data to S3 Glacier Deep Archive for long-term storage.
upvoted 1 times
2 months, 1 week ago
A is the correct answer.
Even though they have 500 Mbps internet speed, it will take around 130 days to transfer the data from on premises to AWS,
so they have only one option, which is Snowball devices.
upvoted 2 times
2 months, 3 weeks ago
Selected Answer: A
A is the correct one
upvoted 1 times
3 months, 2 weeks ago
Q: What is AWS Snowball Edge?
AWS Snowball Edge is an edge computing and data transfer device provided by the AWS Snowball service. It has on-board storage and compute
power that provides select AWS services for use in edge locations. Snowball Edge comes in two options, Storage Optimized and Compute
Optimized, to support local data processing and collection in disconnected environments such as ships, windmills, and remote factories. Learn
more about its features here.
Q: What happened with the original 50 TB and 80 TB AWS Snowball devices?
The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for data
transfer.
Q: Can I still order the original Snowball 50 TB and 80 TB devices?
No. For data transfer needs now, please select the Snowball Edge Storage Optimized devices.
Community vote distribution: A (100%)
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: A
Snowball
upvoted 1 times
4 months, 1 week ago
9 Snowball devices are needed to migrate the 700TB of data.
upvoted 1 times
4 months, 1 week ago
700TB of Data can not be transferred through a 500Mbps link within one month.
Total data that can be transferred in one month = bandwidth x time
= (500 Mbps / 8 bits per byte) x (30 days x 24 hours x 3600 seconds per hour)
= 648,000 GB or 648 TB
This is calculated theoretically with the maximum available bandwidth. Due to a number of factors, the actual total transferred data may be less
than the theoretical maximum.
upvoted 3 times
1 month, 3 weeks ago
Good thinking; agree with the solution. Only the calculation is wrong: it should give 162 TB as a result.
upvoted 3 times
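The 130-day and 162 TB figures in the thread above can be checked with a few lines of arithmetic (decimal units are assumed, with 1 TB = 10^12 bytes; real sustained throughput would be lower):

```python
# Sanity-check of the transfer-time argument for 700 TB over 500 Mbps.

DATA_TB = 700
LINK_MBPS = 500

bits = DATA_TB * 10**12 * 8
seconds = bits / (LINK_MBPS * 10**6)
days = seconds / 86400        # roughly 130 days, far beyond the 1-month window

# Capacity of the link over one 30-day month, in TB:
month_tb = LINK_MBPS * 10**6 / 8 * 30 * 86400 / 10**12   # roughly 162 TB
```

Either way the math is done, the public internet link cannot move 700 TB within a month, which is what rules out options B, C, and D and leaves Snowball devices.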
5 months, 1 week ago
Snowball devices; the answer is A.
upvoted 2 times
5 months, 1 week ago
A is incorrect as DC is an expensive option. Correct answer should be C as the company already has 500Mbps that can be used for data transfer. By
consuming all the available internet bandwidth, data transfer will complete in 3 hours 6 mins - https://www.omnicalculator.com/other/data-transfer
upvoted 1 times
5 months, 1 week ago
Ignore please, miscalculated time to transfer, it will take 129 days and will breach the 1 month requirement. A is correct.
upvoted 5 times
5 months, 1 week ago
Selected Answer: A
A is correct
upvoted 1 times
5 months, 2 weeks ago
A is correct but not less expensive. I think it should be D.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
A is correct.
Cannot copy files directly from on-prem to S3 Glacier with DataSync. It should be S3 Standard first, then configure an S3 Lifecycle rule to transition to
Glacier => Exclude D.
upvoted 1 times
5 months ago
yes you can - https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#using-storage-classes
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
The correct answer is A
upvoted 1 times
5 months, 2 weeks ago
Less expensive = Data Sync i guess (D)
upvoted 2 times
5 months ago
"The migration must be complete within 1 month": you can't complete this with a 500 Mbps transfer. At that speed we would need 129 days.
Snowball is the only way to do it in the desired time.
upvoted 2 times
Topic 1
Question #216
A company has a serverless website with millions of objects in an Amazon S3 bucket. The company uses the S3 bucket as the origin for an
Amazon CloudFront distribution. The company did not set encryption on the S3 bucket before the objects were loaded. A solutions architect needs
to enable encryption for all existing objects and for all objects that are added to the S3 bucket in the future.
Which solution will meet these requirements with the LEAST amount of effort?
A. Create a new S3 bucket. Turn on the default encryption settings for the new S3 bucket. Download all existing objects to temporary local
storage. Upload the objects to the new S3 bucket.
B. Turn on the default encryption settings for the S3 bucket. Use the S3 Inventory feature to create a .csv file that lists the unencrypted
objects. Run an S3 Batch Operations job that uses the copy command to encrypt those objects.
C. Create a new encryption key by using AWS Key Management Service (AWS KMS). Change the settings on the S3 bucket to use server-side
encryption with AWS KMS managed encryption keys (SSE-KMS). Turn on versioning for the S3 bucket.
D. Navigate to Amazon S3 in the AWS Management Console. Browse the S3 bucket’s objects. Sort by the encryption field. Select each
unencrypted object. Use the Modify button to apply default encryption settings to every unencrypted object in the S3 bucket.
Correct Answer:
B
Highly Voted
5 months, 2 weeks ago
Selected Answer: B
Step 1: S3 inventory to get object list
Step 2 (If needed): Use S3 Select to filter
Step 3: S3 object operations to encrypt the unencrypted objects.
For ongoing objects, use default encryption.
upvoted 10 times
5 months, 2 weeks ago
Useful ref link: https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
upvoted 7 times
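The three-step approach in the top-voted comment starts from an S3 Inventory report, which is plain CSV. A minimal sketch of the filtering step in Python (the bucket name, keys, and three-column layout are illustrative; a real report's columns depend on how the inventory was configured):

```python
import csv
import io

def unencrypted_objects(inventory_csv):
    """Return (bucket, key) pairs whose encryption status is NOT-SSE.

    Assumes a hypothetical minimal inventory layout with the columns
    Bucket, Key, EncryptionStatus.
    """
    rows = csv.reader(io.StringIO(inventory_csv))
    return [(b, k) for b, k, status in rows if status == "NOT-SSE"]

# Example inventory fragment (S3 Inventory CSV reports have no header row):
report = (
    "my-bucket,photos/a.jpg,NOT-SSE\n"
    "my-bucket,photos/b.jpg,SSE-S3\n"
    "my-bucket,docs/c.pdf,NOT-SSE\n"
)
print(unencrypted_objects(report))
# -> [('my-bucket', 'photos/a.jpg'), ('my-bucket', 'docs/c.pdf')]
```

The resulting list would then feed an S3 Batch Operations copy job that rewrites each object so the bucket's default encryption is applied.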
Most Recent
15 hours, 6 minutes ago
Selected Answer: B
By enabling default encryption settings on the S3, all newly added objects will be automatically encrypted. To encrypt the existing objects, the S3
Inventory feature can be used to generate a list of unencrypted objects. Then, an S3 Batch Operations job can be executed to copy those objects
while applying encryption.
A. This solution involves creating a new S3 and manually downloading and uploading all existing objects. It requires significant effort and time to
transfer millions of objects, making it a less efficient solution.
C. While enabling SSE with AWS KMS is a valid approach to encrypt objects in an S3, it does not address the requirement of encrypting existing
objects. It only applies encryption to new objects added to the bucket.
D. Manually modifying each object in the S3 to apply default encryption settings is a labor-intensive and error-prone process. It would require
individually selecting and modifying each unencrypted object, which is impractical for a large number of objects.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: B
B...
https://catalog.us-east-1.prod.workshops.aws/workshops/05f16f1a-0bbf-45a7-a304-4fcd7fca3d1f/en-US/s3-track/module-2
You're welcome
upvoted 3 times
4 months, 2 weeks ago
Selected Answer: B
Amazon S3 now configures default encryption on all existing unencrypted buckets to apply server-side encryption with S3 managed keys (SSE-S3)
as the base level of encryption for new objects uploaded to these buckets. Objects that are already in an existing unencrypted bucket won't be
automatically encrypted.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html
upvoted 3 times
4 months, 2 weeks ago
Selected Answer: B
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-copy-example-bucket-key.html
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: B
B is the correct answer
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: B
B 100%
https://spin.atomicobject.com/2020/09/15/aws-s3-encrypt-existing-objects/
upvoted 1 times
5 months ago
Selected Answer: A
Why is no one discussing A ? I think A can also achieve the required results. B is the most appropriate answer though.
upvoted 1 times
5 months ago
Selected Answer: B
S3 provides a single control to automatically encrypt all new objects in a bucket with SSE-S3 or SSE-KMS. Unfortunately, these controls only affect
new objects. If your bucket already contains millions of unencrypted objects, then turning on automatic encryption does not make your bucket
secure as the unencrypted objects remain.
For S3 buckets with a large number of objects (millions to billions), use Amazon S3 Inventory to get a list of the unencrypted objects, and Amazon
S3 Batch Operations to encrypt the large number of old, unencrypted files.
upvoted 3 times
5 months ago
Versioning:
When you overwrite an S3 object, it results in a new object version in the bucket. However, this will not remove the old unencrypted versions of
the object. If you do not delete the old version of your newly encrypted objects, you will be charged for the storage of both versions of the
objects.
S3 Lifecycle
If you want to remove these unencrypted versions, use S3 Lifecycle to expire previous versions of objects. When you add a Lifecycle
configuration to a bucket, the configuration rules apply to both existing objects and objects added later. C is missing this step, which I believe is
what makes B the better choice. B includes the functionality of encrypting the old unencrypted objects via Batch Operations, whereas,
Versioning does not address the old unencrypted objects.
upvoted 1 times
5 months ago
S3 provides a single control to automatically encrypt all new objects in a bucket with SSE-S3 or SSE-KMS. Unfortunately, these controls only affect
new objects. If your bucket already contains millions of unencrypted objects, then turning on automatic encryption does not make your bucket
secure as the unencrypted objects remain.
For S3 buckets with a large number of objects (millions to billions), use Amazon S3 Inventory to get a list of the unencrypted objects, and Amazon
S3 Batch Operations to encrypt the large number of old, unencrypted files.
upvoted 1 times
5 months ago
Versioning:
When you overwrite an S3 object, it results in a new object version in the bucket. However, this will not remove the old unencrypted versions of
the object. If you do not delete the old version of your newly encrypted objects, you will be charged for the storage of both versions of the
objects.
S3 Lifecycle
If you want to remove these unencrypted versions, use S3 Lifecycle to expire previous versions of objects. When you add a Lifecycle
configuration to a bucket, the configuration rules apply to both existing objects and objects added later. C is missing this step, which I believe is
what makes B the better choice. B includes the functionality of encrypting the old unencrypted objects via Batch Operations, whereas,
Versioning does not address the old unencrypted objects.
upvoted 1 times
5 months ago
Please remove duplicate response as I was meaning to submit a voting comment.
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
C is wrong. Even though you turn on SSE-KMS with a new key, the existing objects are still yet to be encrypted. They still need to be manually
encrypted via S3 Batch Operations.
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
https://spin.atomicobject.com/2020/09/15/aws-s3-encrypt-existing-objects/
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
C is the answer
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
Agree with Parsons
upvoted 1 times
5 months, 1 week ago
the answer is C
also, the question requires future encryption of the objects in the S3 bucket = VERSIONING
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
Could not enable default encryption for an existing bucket, so we need to use KMS.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
The correct answer is C
upvoted 1 times
Topic 1
Question #217
A company runs a global web application on Amazon EC2 instances behind an Application Load Balancer. The application stores data in Amazon
Aurora. The company needs to create a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss. The
solution does not need to handle the load when the primary infrastructure is healthy.
What should a solutions architect do to meet these requirements?
A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create
an Aurora Replica in a second AWS Region.
B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover.
Create an Aurora Replica in the second Region.
C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora
database that is restored from the latest snapshot.
D. Back up data with AWS Backup. Use the backup to create the required infrastructure in a second AWS Region. Use Amazon Route 53 to
configure active-passive failover. Create an Aurora second primary instance in the second Region.
Correct Answer:
D
Highly Voted
5 months, 2 weeks ago
Selected Answer: A
A is correct.
- "The solution does not need to handle the load when the primary infrastructure is healthy." => Should use Route 53 Active-Passive ==> Exclude
B, C
- D is incorrect because of "Create an Aurora second primary instance in the second Region."; creating an Aurora Replica is enough.
upvoted 15 times
5 months, 2 weeks ago
Ref link: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html
upvoted 3 times
Highly Voted
4 months, 3 weeks ago
Selected Answer: D
I was confused between A and D, but I think D is the answer because this seems to be a cost-related problem. A replica is a kind of standby that you
can promote to be the main DB anytime without much downtime, but here it says the company can withstand 30 minutes of downtime, so we can just
keep a backup of the instance and create a DB from the backup whenever required, hence less cost.
upvoted 6 times
Most Recent
3 weeks, 2 days ago
Selected Answer: D
I vote D, because option A is not highly available. In option A, you can't configure active-passive failover because you haven't created a backup
infrastructure.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: A
It is a cross-Region DR strategy. You need a read replica and the application in another Region to have a realistic DR option. The read replica will
take a few minutes to be promoted/active, and then the application is available. Option D lacks clarity on the application, and backups can take time
to restore.
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: A
Depending on the Regions involved and the amount of data to be copied, a cross-Region snapshot copy can take hours to complete and will be a
factor to consider for the RPO requirements. You need to take this into account when you estimate the RPO of this DR strategy.
If you have strict RTO and RPO requirements, you should consider a different DR strategy, such as Amazon Aurora Global Database .
https://aws.amazon.com/blogs/database/cost-effective-disaster-recovery-for-amazon-aurora-databases-using-aws-backup/
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: D
The solution does not need to handle the load when the primary infrastructure is healthy. -> Amazon Route 53 active-passive failover -> A,D
The company can tolerate up to 30 minutes of downtime and potential data loss -> backup -> D
you don't have to use read replicas if you can tolerate downtime and data loss.
upvoted 3 times
4 months, 1 week ago
Consider Answer B.
It is suggesting a Pilot Light DR strategy.
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 2 times
3 months, 3 weeks ago
I will vote B. I initially thought it was Pilot Light; however, after a 2nd read, it seems more like Warm Standby. Option D looks more like a Backup
and Restore strategy, and it will take more than 30 minutes to get it done. C is wrong; a snapshot takes a longer time to restore.
upvoted 1 times
3 months, 3 weeks ago
The key sentence is
"a disaster recovery solution and can tolerate up to 30 minutes of downtime and potential data loss"
Take a look at the visualization in the URL provided. Pilot light = 30 minutes.
upvoted 2 times
5 months, 1 week ago
Selected Answer: A
A is correct
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
aaaaaaaa
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
answer is d
upvoted 1 times
5 months, 2 weeks ago
Ans is A
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
A is correct answer.
https://www.examtopics.com/discussions/amazon/view/81439-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/81439-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Topic 1
Question #218
A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is
assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server
accessible from everywhere on port 443.
Which combination of steps will accomplish this task? (Choose two.)
A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination
0.0.0.0/0.
Correct Answer:
AE
Highly Voted
5 months, 2 weeks ago
Selected Answer: AE
A, E is the perfect combination. To be more precise, we should add the outbound rule "outbound TCP port 32768-65535 to destination 0.0.0.0/0"
for the ephemeral ports because NACLs are stateless.
upvoted 8 times
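Because network ACLs are stateless, the request and the response must each match an allow rule. A toy model of that check in Python (the rule tuples are an invented format for illustration, not the EC2 API):

```python
def nacl_allows(rules, port):
    """True if a (port_from, port_to, action) rule allows the port.

    First matching rule wins, mimicking NACL rule-number ordering;
    no match means the implicit deny applies.
    """
    for port_from, port_to, action in rules:
        if port_from <= port <= port_to:
            return action == "allow"
    return False

# Option E: inbound 443 open, outbound ephemeral range open
inbound = [(443, 443, "allow")]
outbound = [(32768, 65535, "allow")]

request_ok = nacl_allows(inbound, 443)       # client -> server on 443
response_ok = nacl_allows(outbound, 40000)   # server -> client on an ephemeral port
print(request_ok and response_ok)  # -> True
```

With an empty outbound rule set (the modified NACL in the question), the response check would fail even though the inbound request is allowed, which is why a security group rule alone (option A or C) is not enough.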
Most Recent
3 weeks, 6 days ago
Ports 32768-65535 allow outbound IPv4 responses to clients on the internet (for example, serving webpages to people visiting the web servers in
the subnet).
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: AE
NACL blocks outgoing traffic since it is in fact stateless. Option E allows outbound traffic from ephemeral ports going outside of the VPC back to the
web.
upvoted 2 times
3 months, 3 weeks ago
It can't be C, since the current NACL blocks all traffic, including outbound. Need to allow outbound traffic through the NACL.
But E is a bad answer, since ephemeral ports start at 1024, not 32768.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: AC
A and C not E
Option E states to allow incoming TCP ports on 443 and outgoing on 32768-65535 to all IP addresses (0.0.0.0/0). This option only allows outgoing
ports and does not guarantee that incoming connections on 443 will be allowed. It does not meet the requirement of making the web server
accessible on port 443 from anywhere. Therefore, option C which states to allow incoming TCP ports on 443 from all IP addresses is the best
answer to meet the requirements.
upvoted 2 times
4 months ago
Answer: AE. Incoming traffic on port 443, but the server can use any port to reply back.
upvoted 2 times
5 months, 1 week ago
Selected Answer: AE
AE correct
upvoted 3 times
5 months, 1 week ago
Selected Answer: AE
A & E , E as NACL is stateless.
upvoted 2 times
Community vote distribution
AE (89%)
11%
5 months, 2 weeks ago
AE:
https://www.examtopics.com/discussions/amazon/view/29767-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: AE
https://www.examtopics.com/discussions/amazon/view/29767-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: AE
A E is correct
upvoted 1 times
5 months, 2 weeks ago
Ans AE
upvoted 1 times
Topic 1
Question #219
A company’s application is having performance issues. The application is stateful and needs to complete in-memory tasks on Amazon EC2
instances. The company used AWS CloudFormation to deploy infrastructure and used the M5 EC2 instance family. As traffic increased, the
application performance degraded. Users are reporting delays when the users attempt to access the application.
Which solution will resolve these issues in the MOST operationally efficient way?
A. Replace the EC2 instances with T3 EC2 instances that run in an Auto Scaling group. Make the changes by using the AWS Management
Console.
B. Modify the CloudFormation templates to run the EC2 instances in an Auto Scaling group. Increase the desired capacity and the maximum
capacity of the Auto Scaling group manually when an increase is necessary.
C. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Use Amazon CloudWatch built-in EC2 memory
metrics to track the application performance for future capacity planning.
D. Modify the CloudFormation templates. Replace the EC2 instances with R5 EC2 instances. Deploy the Amazon CloudWatch agent on the EC2
instances to generate custom application latency metrics for future capacity planning.
Correct Answer:
D
Highly Voted
5 months, 2 weeks ago
Selected Answer: D
D is the correct answer.
"in-memory tasks" => need the "R" EC2 instance type to achieve memory optimization. So we are concerned with C & D.
Because EC2 instances don't send memory metrics to CloudWatch by default, we have to install the CW agent to achieve this.
upvoted 14 times
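As the comments note, memory metrics require the CloudWatch agent. A minimal agent configuration that collects memory utilization looks roughly like the following, built here as a Python dict for illustration (the namespace is a hypothetical example):

```python
import json

# Minimal amazon-cloudwatch-agent configuration collecting memory usage;
# in a real deployment this JSON is written to the agent's config path
# and the agent is started with it.
agent_config = {
    "metrics": {
        "namespace": "MyApp",  # hypothetical custom namespace
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]}
        }
    }
}

print(json.dumps(agent_config, indent=2))
```

Application-level latency metrics, as option D suggests, would be published separately by the application (for example via the agent's StatsD listener or `put_metric_data`); the fragment above only covers the memory side.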
Highly Voted
5 months, 2 weeks ago
Selected Answer: D
It's D. EC2 does not provide memory metrics to CloudWatch by default and requires the CloudWatch agent to be installed on the monitored
instances: https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-memory-metrics-ec2/
upvoted 5 times
Most Recent
1 month ago
Selected Answer: D
Option D is the correct answer.
upvoted 1 times
2 months, 4 weeks ago
will go for C
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
Would go with D
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
I think D
upvoted 1 times
Community vote distribution
D (100%)
Topic 1
Question #220
A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly
variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed
within a few seconds after a request is made.
Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?
A. An AWS Glue job
B. An AWS Lambda function
C. A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)
D. A containerized service hosted in Amazon ECS with Amazon EC2
Correct Answer:
B
Highly Voted
5 months, 2 weeks ago
Selected Answer: B
B is the correct answer.
API Gateway + Lambda is the perfect solution for modern applications with serverless architecture.
upvoted 5 times
Most Recent
1 month ago
Selected Answer: B
Option B meets the requirements.
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
Lambda !
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/43780-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Community vote distribution
B (100%)
Topic 1
Question #221
A company runs an application on a group of Amazon Linux EC2 instances. For compliance reasons, the company must retain all application log
files for 7 years. The log files will be analyzed by a reporting tool that must be able to access all the files concurrently.
Which storage solution meets these requirements MOST cost-effectively?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Amazon EC2 instance store
D. Amazon S3
Correct Answer:
D
Community vote distribution
D (100%)
6 days, 22 hours ago
s3<efs<ebs
upvoted 1 times
2 weeks, 1 day ago
"The log files will be analyzed by a reporting tool that must be able to access all the files concurrently", so you need concurrent access to get
the logs. So it is EFS. Letter B.
upvoted 1 times
2 weeks, 4 days ago
https://aws.amazon.com/efs/faq/
EFS is a file storage service for use with Amazon compute (EC2, containers, serverless) and on-premises servers. EFS provides a file system interface,
file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to thousands of EC2 instances.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: D
Whenever we see long time storage and no special requirements that needs EFS or FSx, then S3 is the way.
upvoted 1 times
3 months ago
Selected Answer: D
To meet the requirements of retaining application log files for 7 years and allowing concurrent access by a reporting tool, while also being cost-
effective, the recommended storage solution would be D: Amazon S3.
upvoted 2 times
3 months ago
ddddddddddddddddddd
upvoted 1 times
3 months ago
What about the keyword "concurrently"? Doesn't this mean EFS?
upvoted 3 times
5 months, 1 week ago
Selected Answer: D
Cost Effective: S3
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
S3 is enough from the lowest-cost perspective.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/22182-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Topic 1
Question #222
A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an automated tool that is hosted in an
AWS account that the vendor owns. The vendor does not have IAM access to the company’s AWS account.
How should a solutions architect grant this access to the vendor?
A. Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the appropriate IAM policies to the role for
the permissions that the vendor requires.
B. Create an IAM user in the company’s account with a password that meets the password complexity requirements. Attach the appropriate
IAM policies to the user for the permissions that the vendor requires.
C. Create an IAM group in the company’s account. Add the tool’s IAM user from the vendor account to the group. Attach the appropriate IAM
policies to the group for the permissions that the vendor requires.
D. Create a new identity provider by choosing “AWS account” as the provider type in the IAM console. Supply the vendor’s AWS account ID and
user name. Attach the appropriate IAM policies to the new provider for the permissions that the vendor requires.
Correct Answer:
A
Highly Voted
5 months, 1 week ago
Selected Answer: A
A is proper
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 7 times
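The delegation in option A rests on the role's trust policy naming the vendor's AWS account as a principal. A sketch of such a policy, built as a Python dict (the account ID and external ID are placeholders):

```python
import json

VENDOR_ACCOUNT_ID = "111122223333"  # placeholder, not a real account

# Trust policy for the role created in the company's account: it lets
# principals in the vendor's account call sts:AssumeRole on this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # External ID guards against the confused-deputy problem
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The vendor's tool then calls sts:AssumeRole on the role's ARN and works with the temporary credentials it receives; the permissions come from the IAM policies attached to the role, not from any IAM user in the company's account.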
Most Recent
3 weeks, 6 days ago
Selected Answer: C
....................................
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: A
Option A fulfill the requirements.
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
IAM role is the answer
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
A is correct answer.
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: A
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
A is the correct answer.
upvoted 3 times
5 months, 2 weeks ago
Community vote distribution
A (84%)
Other
Selected Answer: D
My guess is D: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
upvoted 2 times
Topic 1
Question #223
A company has deployed a Java Spring Boot application as a pod that runs on Amazon Elastic Kubernetes Service (Amazon EKS) in private
subnets. The application needs to write data to an Amazon DynamoDB table. A solutions architect must ensure that the application can interact
with the DynamoDB table without exposing traffic to the internet.
Which combination of steps should the solutions architect take to accomplish this goal? (Choose two.)
A. Attach an IAM role that has sufficient privileges to the EKS pod.
B. Attach an IAM user that has sufficient privileges to the EKS pod.
C. Allow outbound connectivity to the DynamoDB table through the private subnets’ network ACLs.
D. Create a VPC endpoint for DynamoDB.
E. Embed the access keys in the Java Spring Boot code.
Correct Answer:
AD
4 weeks, 1 day ago
Selected Answer: AD
A & D options fulfill the requirements.
upvoted 1 times
5 months, 1 week ago
Selected Answer: AD
Definitely
upvoted 1 times
5 months, 1 week ago
Selected Answer: AD
A D are the correct options
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: AD
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
https://aws.amazon.com/about-aws/whats-new/2019/09/amazon-eks-adds-support-to-assign-iam-permissions-to-kubernetes-service-accounts/
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: AD
A, D is the correct answer.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: AD
The correct answer is A,D
upvoted 1 times
Community vote distribution
AD (100%)
Topic 1
Question #224
A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The
company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances
randomly.
Which combination of steps should the company take to meet these requirements? (Choose two.)
A. Create an Amazon Route 53 failover routing policy.
B. Create an Amazon Route 53 weighted routing policy.
C. Create an Amazon Route 53 multivalue answer routing policy.
D. Launch three EC2 instances: two instances in one Availability Zone and one instance in another Availability Zone.
E. Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.
Correct Answer:
CE
Highly Voted
3 months, 4 weeks ago
Selected Answer: BE
I went back and rewatched the lectures from Udemy on Weighted and Multi-Value. The lecturer said that Multi-value is *not* as substitute for ELB
and he stated that DNS load balancing is a good use case for Weighted routing policies
upvoted 5 times
2 weeks, 3 days ago
Weighted routing is based on the assigned weights, so it cannot choose randomly. Please see the last sentence of the question: traffic must reach instances randomly.
upvoted 2 times
Most Recent
5 days, 17 hours ago
Selected Answer: CE
Randomly is the key word
upvoted 1 times
2 weeks, 3 days ago
Selected Answer: CE
C: Multivalue is used to route traffic approximately randomly to multiple resources and supports health checks.
B: Weighted is used when you need to send more load to one server than another. If you need random distribution to all servers, the answer should
be C; with Weighted you would have to assign the same weight to every server.
upvoted 1 times
2 weeks, 3 days ago
Selected Answer: CE
Must be C and E. B is not correct because it is based on the assigned weights; it cannot distribute randomly.
upvoted 1 times
3 weeks ago
Selected Answer: CE
Option C, creating an Amazon Route 53 multivalue answer routing policy, is the correct choice. With this routing policy, Route 53 returns multiple
IP addresses for the same domain name, allowing the traffic to be distributed randomly among the available EC2 instances. This ensures that the
traffic is evenly distributed across the instances launched in different Availability Zones, achieving the desired randomness and load balancing.
Option E is the correct choice. By launching instances in different Availability Zones, the company ensures that there are redundant copies of the
application running in separate physical locations, providing fault tolerance. With two instances in one Availability Zone and two instances in
another, traffic can be distributed randomly among them, improving availability and load balancing.
upvoted 1 times
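The behavior option C relies on can be mimicked in a few lines: Route 53 returns up to eight healthy records per query and the client picks among them. A rough simulation in Python (not the actual Route 53 algorithm; the IPs are made up):

```python
import random

def multivalue_answer(records, max_answers=8):
    """Return up to max_answers healthy records, like a multivalue DNS response."""
    healthy = [ip for ip, ok in records if ok]
    random.shuffle(healthy)  # response order varies per query
    return healthy[:max_answers]

# Four instances across two AZs (option E); one has failed its health check
records = [
    ("10.0.1.10", True), ("10.0.1.11", True),   # AZ a
    ("10.0.2.10", True), ("10.0.2.11", False),  # AZ b, one unhealthy
]
answer = multivalue_answer(records)
print(sorted(answer))  # -> ['10.0.1.10', '10.0.1.11', '10.0.2.10']
```

The unhealthy record is never returned, which is what gives the fault tolerance; the random ordering of the remaining records is what spreads traffic across instances.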
3 weeks, 2 days ago
Selected Answer: CE
https://aws.amazon.com/route53/faqs/
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: BE
I vote for B & E options.
upvoted 1 times
1 month, 1 week ago
It can also be A) failover routing policy:
"Active-active failover:
Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable,
Route 53 can detect that it's unhealthy and stop including it when responding to queries.
In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as
weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy record".
upvoted 1 times
1 month, 1 week ago
Selected Answer: CE
After reading the doc, I understood that the question does not ask to route traffic with a specific proportion (in that case, it would be the Weighted
routing policy). The question requires randomness, so the only option that does this truly randomly is the Multivalue answer routing policy.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: BE
Multivalue answer routing policy allows Route 53 to respond to DNS queries with up to eight healthy records selected at random, but it does not
allow you to specify the proportion of traffic that each record receives. Weighted routing policy allows you to route traffic randomly to all running
EC2 instances based on the weights that you assign to each instance.
upvoted 4 times
1 month, 3 weeks ago
Selected Answer: CE
C: For traffic to go to EC2 'Randomly', R53 will answer the IP's of all EC2's and the client will choose randomly, while maintaining high availability
and fault tolerance as unhealthy IP's will not be sent forward as the answer to the DNS query.
upvoted 1 times
2 months ago
Selected Answer: CE
Route 53 now supports multivalue answers in response to DNS queries. While not a substitute for a load balancer, the ability to return multiple
health-checkable IP addresses in response to DNS queries is a way to use DNS to improve availability and load balancing. If you want to route
traffic randomly to multiple resources, such as web servers, you can create one multivalue answer record for each resource and, optionally,
associate an Amazon Route 53 health check with each record. Amazon Route 53 supports up to eight healthy records in response to each DNS
query.
https://aws.amazon.com/route53/faqs/
upvoted 1 times
2 months ago
Selected Answer: BE
BE.
C is wrong because Multi Value means AWS returning multiple DNS records for clients, which is not the case we are talking about.
upvoted 3 times
2 months, 1 week ago
Selected Answer: CE
C: To route traffic approximately randomly to multiple resources, such as web servers, you create one multivalue answer record for each resource.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-multivalue.html
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: BE
I think its BE and ChatGPT agrees
upvoted 3 times
2 months, 3 weeks ago
Selected Answer: CE
CE
after some investigation I am going with Multi Value, due to the fact that as long as the health checks are good, all records will be provided evenly.
(Video)
https://www.youtube.com/watch?v=K9GD_3CeAik
With Weighted you have to configure the weight for each record, and it is not telling us all weights are the same. P.S. you can configure
health checks for Weighted records too. (Video)
https://www.youtube.com/watch?v=Bto9aN2VFT0
upvoted 2 times
2 months, 4 weeks ago
Selected Answer: BE
I don't know why C is the answer. Multi-value returns records for the client to choose. It has nothing to do with "Traffic must reach all running EC2
instances randomly".
upvoted 2 times
Topic 1
Question #225
A media company collects and analyzes user activity data on premises. The company wants to migrate this capability to AWS. The user activity
data store will continue to grow and will be petabytes in size. The company needs to build a highly available data ingestion solution that facilitates
on-demand analytics of existing data and new data with SQL.
Which solution will meet these requirements with the LEAST operational overhead?
A. Send activity data to an Amazon Kinesis data stream. Configure the stream to deliver the data to an Amazon S3 bucket.
B. Send activity data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Redshift
cluster.
C. Place activity data in an Amazon S3 bucket. Configure Amazon S3 to run an AWS Lambda function on the data as the data arrives in the S3
bucket.
D. Create an ingestion service on Amazon EC2 instances that are spread across multiple Availability Zones. Configure the service to forward
data to an Amazon RDS Multi-AZ database.
Correct Answer:
A
4 days, 16 hours ago
petabytes in size => redshift
upvoted 2 times
2 weeks, 1 day ago
It's A. Data Stream is better in this case, and you can query data in S3 with Athena
upvoted 1 times
1 week, 4 days ago
Kinesis Data Streams can't write to S3 directly. That's why B is the only correct answer left.
upvoted 1 times
2 days, 11 hours ago
Answer A… key phrase 'least operational overhead'.
KDF can write to S3 … https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: B
Option B is correct answer.
upvoted 1 times
2 months ago
Selected Answer: B
This solution meets the requirements as follows:
• Kinesis Data Firehose can scale to ingest and process multiple terabytes per hour of streaming data. This can easily handle the petabyte-scale
data volumes.
• Firehose can deliver the data to Redshift, a petabyte-scale data warehouse, enabling on-demand SQL analytics of the data.
• Redshift is a fully managed service, minimizing operational overhead. Firehose is also fully managed, handling scalability, availability, and
durability of the streaming data ingestion.
upvoted 1 times
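The Firehose-to-Redshift path described in the comments above can be sketched as a delivery-stream configuration. This is a minimal illustration only; the role ARN, JDBC URL, table, credentials, and bucket names are hypothetical placeholders, and Firehose always stages records in S3 before issuing a Redshift COPY:

```python
# Sketch of option B's ingestion path: a Firehose delivery stream that loads
# records into Redshift. All resource names below are hypothetical.

def redshift_delivery_stream_config(stream_name: str) -> dict:
    """Build the request body that boto3's
    firehose.create_delivery_stream(**config) would accept."""
    role = "arn:aws:iam::123456789012:role/firehose-role"  # hypothetical
    return {
        "DeliveryStreamName": stream_name,
        "DeliveryStreamType": "DirectPut",  # producers call PutRecord directly
        "RedshiftDestinationConfiguration": {
            "RoleARN": role,
            "ClusterJDBCURL": (
                "jdbc:redshift://activity-cluster.example."
                "us-east-1.redshift.amazonaws.com:5439/activity"  # hypothetical
            ),
            "CopyCommand": {
                "DataTableName": "user_activity",
                "CopyOptions": "FORMAT AS JSON 'auto'",
            },
            "Username": "firehose_user",       # hypothetical credentials
            "Password": "example-password",
            # Firehose stages data here before the COPY into Redshift:
            "S3Configuration": {
                "RoleARN": role,
                "BucketARN": "arn:aws:s3:::activity-staging-bucket",
            },
        },
    }

config = redshift_delivery_stream_config("user-activity-stream")
# With real resources you would run:
#   boto3.client("firehose").create_delivery_stream(**config)
```

Once the stream exists, producers just call PutRecord; scaling, buffering, and the COPY into Redshift are managed by the service, which is what makes option B low-overhead.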
3 months ago
Selected Answer: B
B: The answer is certainly option "B" because ingesting user activity data can easily be handled by Amazon Kinesis Data streams. The ingested data
can then be sent into Redshift for Analytics.
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift Serverless lets you access and analyze
data without all of the configurations of a provisioned data warehouse.
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
upvoted 2 times
3 months, 2 weeks ago
Community vote distribution: B (89%), other (11%)
The key sentence here is "that facilitates on-demand analytics"; that's the reason we need to choose Kinesis Data Streams over Data
Firehose.
upvoted 1 times
5 months, 1 week ago
Selected Answer: B
B: The Kinesis Data Firehose service automatically loads the data into Amazon Redshift, which is a petabyte-scale data warehouse service. It allows you to
perform on-demand analytics with minimal operational overhead. Since the requirement didn't state what kind of analytics you need to run, we can
assume that we do not need to set up additional services to provide further analytics. Thus, it has the least operational overhead.
Why not A: It is a viable solution, but storing the data in S3 would require you to set up additional services like Amazon Redshift or Amazon Athena
to perform the analytics.
upvoted 2 times
5 months, 1 week ago
Selected Answer: B
Data ingestion through Kinesis data streams will require manual intervention to provide more shards as data size grows. Kinesis firehose will ingest
data with the least operational overhead.
upvoted 4 times
5 months, 1 week ago
Selected Answer: A
I think the key word in the question is "ingestion"... which is Data Streams.
Data Streams is a low latency streaming service in AWS Kinesis with the facility for ingesting at scale. On the other hand, Kinesis Firehose aims to
serve as a data transfer service. The primary purpose of Kinesis Firehose focuses on loading streaming data to Amazon S3, Splunk, ElasticSearch,
and RedShift
upvoted 3 times
5 months, 1 week ago
Selected Answer: B
petabytes: redshift
upvoted 3 times
5 months, 1 week ago
Selected Answer: B
Amazon Kinesis Data Firehose + Redshift meets the requirements
upvoted 1 times
5 months, 2 weeks ago
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data
and scale to a petabyte or more. This allows you to use your data to gain new insights for your business and customers.
The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can
upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance
using the same SQL-based tools and business intelligence applications that you use today.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
for Analytics of Petabyte size data, it should be Redshift cluster
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: B
B is the correct answer.
We cannot deliver data from KDS directly to S3 => A is ruled out.
upvoted 4 times
5 months, 2 weeks ago
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/83853-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: B
No it's B
upvoted 2 times
Topic 1
Question #226
A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance.
The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices
will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Use AWS Glue to process the raw data in Amazon S3.
B. Use Amazon Route 53 to route traffic to different EC2 instances.
C. Add more EC2 instances to accommodate the increasing amount of incoming data.
D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.
E. Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data
stream as a source to deliver the data to Amazon S3.
Correct Answer:
AE
Highly Voted
5 months, 2 weeks ago
Selected Answer: AE
A, E is the correct answer
"RESTful web services" => API Gateway.
"EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket" => GLUE with (Extract - Transform -
Load)
upvoted 8 times
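The front half of option E (API Gateway forwarding device posts into a Kinesis data stream) boils down to building a PutRecord request per reading. A minimal sketch, with a hypothetical stream name and device payload; partitioning by device id is one common way to spread millions of devices across shards:

```python
# Sketch of option E's ingestion: one PutRecord request per device reading.
# Stream name and payload fields are hypothetical.
import base64
import json

def kinesis_put_record_request(stream_name: str, device_id: str, reading: dict) -> dict:
    """Build the PutRecord request that an API Gateway AWS-service integration
    (or boto3's kinesis.put_record) would send for one device reading."""
    payload = json.dumps({"device_id": device_id, **reading}).encode()
    return {
        "StreamName": stream_name,
        # Partitioning by device id distributes load across shards.
        "PartitionKey": device_id,
        # The Kinesis HTTP API expects the record data base64-encoded.
        "Data": base64.b64encode(payload).decode(),
    }

req = kinesis_put_record_request("device-ingest", "device-42", {"temp_c": 21.5})
```

Firehose then reads the stream as a source and delivers batches to S3, where Glue (option A) can transform the raw data.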
Most Recent
1 week, 2 days ago
Why not BC?
upvoted 1 times
2 weeks ago
Why is it not CE?
Add more EC2 instances to accommodate the increasing amount of incoming data?
upvoted 1 times
6 days, 4 hours ago
EC2 is not serverless. They want to minimize overhead.
upvoted 1 times
1 month, 1 week ago
Selected Answer: AE
minimizes operational overhead = Serverless
Glue, Kinesis Datastream, S3 are serverless
upvoted 1 times
4 months, 1 week ago
How about "C" to increase EC2 instances for the increased devices soon?
upvoted 1 times
5 months, 1 week ago
Selected Answer: AE
Glue and API
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: AE
https://www.examtopics.com/discussions/amazon/view/83387-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Community vote distribution
AE (100%)
Topic 1
Question #227
A company needs to retain its AWS CloudTrail logs for 3 years. The company is enforcing CloudTrail across a set of AWS accounts by using AWS
Organizations from the parent account. The CloudTrail target S3 bucket is configured with S3 Versioning enabled. An S3 Lifecycle policy is in
place to delete current objects after 3 years.
After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects has continued to rise. However, the number
of new CloudTrail logs that are delivered to the S3 bucket has remained consistent.
Which solution will delete objects that are older than 3 years in the MOST cost-effective manner?
A. Configure the organization’s centralized CloudTrail trail to expire objects after 3 years.
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
C. Create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years.
D. Configure the parent account as the owner of all objects that are delivered to the S3 bucket.
Correct Answer:
B
4 weeks, 1 day ago
Selected Answer: B
I go for option B.
upvoted 1 times
1 month ago
I don't think it's possible to configure an S3 lifecycle policy to delete all versions of an object, so B is wrong ... I think the question is improperly
worded
upvoted 1 times
1 month, 3 weeks ago
• Versioning has caused the number of objects to increase over time, even as current objects are deleted after 3 years. By deleting previous
versions as well, this will clean up old object versions and reduce storage costs.
• An S3 Lifecycle policy incurs no additional charges and requires no additional resources to configure and run. It is a native S3 tool for
managing object lifecycles cost-effectively.
upvoted 1 times
2 months ago
Selected Answer: B
This is the most cost-effective option because:
• Versioning has caused the number of objects to increase over time, even as current objects are deleted after 3 years. By deleting previous
versions as well, this will clean up old object versions and reduce storage costs.
• An S3 Lifecycle policy incurs no additional charges and requires no additional resources to configure and run. It is a native S3 tool for managing
object lifecycles cost-effectively.
upvoted 2 times
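The fix described above (expiring noncurrent versions as well as current ones) is a single lifecycle rule. A minimal sketch; the rule ID and bucket name are hypothetical, and 1095 days approximates 3 years:

```python
# Sketch of option B: a lifecycle rule that expires current versions after
# 3 years AND permanently deletes noncurrent (previous) versions, which is
# what the original policy was missing on a versioned bucket.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-cloudtrail-logs",  # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            # Current versions become noncurrent (via a delete marker):
            "Expiration": {"Days": 1095},
            # ...and noncurrent versions are permanently removed:
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1095},
        }
        # A separate rule with Expiration.ExpiredObjectDeleteMarker=True can
        # additionally clean up the delete markers left behind.
    ]
}
# Applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="cloudtrail-logs", LifecycleConfiguration=lifecycle_config)
```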
2 months ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html
upvoted 2 times
5 months ago
Selected Answer: C
A more cost-effective solution would be to configure the organization's centralized CloudTrail trail to expire objects after 3 years. This would ensure
that all objects, including previous versions, are deleted after the specified retention period.
Another option would be to create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years, this
would allow you to have more control over the deletion process and to write a custom logic that best fits your use case.
upvoted 3 times
5 months ago
Selected Answer: B
The question clearly says "An S3 Lifecycle policy is in place to delete current objects after 3 years". This implies that previous versions are not
deleted, since this is a separate setting, and since logs are constantly added, it makes sense to delete previous versions as well, so B. D
is wrong, since the parent account (the management account) will already be the owner of all objects delivered to the S3 bucket, "All accounts in
the organization can see MyOrganizationTrail in their list of trails, but member accounts cannot remove or modify the organization trail. Only the
management account or delegated administrator account can change or delete the trail for the organization.", see
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
Community vote distribution
B (82%)
C (18%)
upvoted 2 times
5 months, 1 week ago
Selected Answer: B
B is the right answer. Ref: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/best-practices-
security.html#:~:text=The%20CloudTrail%20trail,time%20has%20passed.
Option A is wrong. There is no way to expire the CloudTrail logs via the trail configuration.
upvoted 3 times
5 months, 1 week ago
Selected Answer: B
Configure the S3 Lifecycle policy to delete previous versions
upvoted 2 times
5 months, 1 week ago
Selected Answer: B
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
upvoted 1 times
5 months, 1 week ago
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
B is correct answer
upvoted 2 times
5 months, 2 weeks ago
Ans: A
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
When you create an organization trail, a trail with the name that you give it is created in every AWS account that belongs to your organization.
Users with CloudTrail permissions in member accounts can see this trail when they log into the AWS CloudTrail console from their AWS accounts, or
when they run AWS CLI commands such as describe-trail. However, users in member accounts do not have sufficient permissions to delete the
organization trail, turn logging on or off, change what types of events are logged, or otherwise change the organization trail in any way.
upvoted 1 times
5 months, 2 weeks ago
correction: Ans D is the answer.
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
B. Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
To delete objects that are older than 3 years in the most cost-effective manner, the company should configure the S3 Lifecycle policy to delete
previous versions as well as current versions. This will ensure that all versions of the objects, including the previous versions, are deleted after 3
years.
upvoted 1 times
Topic 1
Question #228
A company has an API that receives real-time data from a fleet of monitoring devices. The API stores this data in an Amazon RDS DB instance for
later analysis. The amount of data that the monitoring devices send to the API fluctuates. During periods of heavy traffic, the API often returns
timeout errors.
After an inspection of the logs, the company determines that the database is not capable of processing the volume of write traffic that comes
from the API. A solutions architect must minimize the number of connections to the database and must ensure that data is not lost during periods
of heavy traffic.
Which solution will meet these requirements?
A. Increase the size of the DB instance to an instance type that has more available memory.
B. Modify the DB instance to be a Multi-AZ DB instance. Configure the application to write to all active RDS DB instances.
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that
Amazon SQS invokes to write data from the queue to the database.
D. Modify the API to write incoming data to an Amazon Simple Notification Service (Amazon SNS) topic. Use an AWS Lambda function that
Amazon SNS invokes to write data from the topic to the database.
Correct Answer:
C
6 days, 15 hours ago
I think D. "Use an AWS Lambda function that Amazon SQS invokes to write data from the queue to the database": SQS can't invoke Lambda
because SQS is pull-based.
upvoted 1 times
1 week, 6 days ago
Why not B?
upvoted 1 times
3 months, 1 week ago
C is indeed the correct answer for the use case.
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: C
C is correct
upvoted 1 times
3 months, 4 weeks ago
Selected Answer: C
C is correct.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: C
C looks ok
upvoted 1 times
4 months, 3 weeks ago
why not D?
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
C is correct.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: C
Community vote distribution: C (100%)
C. Modify the API to write incoming data to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function that Amazon
SQS invokes to write data from the queue to the database.
To minimize the number of connections to the database and ensure that data is not lost during periods of heavy traffic, the company should
modify the API to write incoming data to an Amazon SQS queue. The use of a queue will act as a buffer between the API and the database,
reducing the number of connections to the database. And the use of an AWS Lambda function invoked by SQS will provide a more flexible way of
handling the data and processing it. This way, the function will process the data from the queue and insert it into the database in a more controlled
way.
upvoted 2 times
5 months, 1 week ago
Did you use ChatGPT?
upvoted 6 times
4 months ago
same question as you :D
upvoted 1 times
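The buffering pattern in answer C can be sketched as the Lambda side of the pipeline: SQS invokes the function with a batch of messages, and the function writes rows to the database over a small, fixed set of connections instead of one connection per device request. The event shape is the standard SQS Lambda event; the `write_row` hook is a hypothetical stand-in for a pooled INSERT (e.g. via Amazon RDS Proxy) so the sketch stays testable without a real database:

```python
# Sketch of option C's consumer: a Lambda handler draining an SQS batch.
import json

def handler(event, context=None, write_row=None):
    """Process one SQS batch. write_row is injected so this sketch runs
    without a database; in a real function it would execute an INSERT
    through a pooled connection."""
    rows = []
    for record in event["Records"]:  # standard SQS event shape
        body = json.loads(record["body"])
        rows.append(body)
        if write_row:
            write_row(body)
    return {"written": len(rows)}

# Minimal local check with a fake SQS event:
fake_event = {"Records": [{"body": json.dumps({"device": "d1", "value": 7})}]}
result = handler(fake_event)  # -> {"written": 1}
```

The queue absorbs traffic spikes so no data is lost, and the Lambda concurrency limit caps the number of simultaneous database connections.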
Topic 1
Question #229
A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as
demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or
from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations.
Which solution meets these requirements?
A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.
C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.
D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
Correct Answer:
A
4 weeks, 1 day ago
Selected Answer: A
Option A is right answer.
upvoted 1 times
4 months, 1 week ago
Selected Answer: A
A is correct because aurora might be more expensive but its serverless and is much faster
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
A is proper.
https://aws.amazon.com/rds/aurora/serverless/
upvoted 3 times
5 months, 1 week ago
Selected Answer: A
Aurora MySQL
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/51509-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Community vote distribution
A (100%)
Topic 1
Question #230
A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company’s application. A
solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable.
What should the solutions architect recommend?
A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone.
B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones.
C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones.
D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.
Correct Answer:
C
3 weeks, 2 days ago
Selected Answer: C
HA: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
Scalability: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
upvoted 1 times
4 months, 1 week ago
Selected Answer: C
FYI y'all, in most cases NAT instances are a bad thing because they're customer managed, while NAT gateways are AWS managed. So in this case I already
know to get rid of the NAT instances. The reason it's C is because it wants high availability, meaning different AZs.
upvoted 2 times
4 months, 2 weeks ago
Could anybody explain why B cannot be the correct answer? That solution also seems to provide scalability (Auto Scaling group), high
availability (different AZs), and fault tolerance (NLB & AZs).
I honestly think that C is not enough, because each NAT gateway provides only limited scalability, and the bandwidth limit is clearly explained in the
documentation. Option C explicitly mentions "two NAT gateways", so the number of NAT gateways is fixed, which will reach its limit soon.
upvoted 2 times
4 months, 1 week ago
Option B proposes to use an Auto Scaling group with Network Load Balancers to continue using the existing two NAT instances. However, NAT
instances do not support automatic failover without a script, unlike NAT gateways which provide this functionality. Additionally, using Network
Load Balancers to balance traffic between NAT instances adds more complexity to the solution.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
upvoted 2 times
5 months ago
C. If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway’s Availability Zone is down,
resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each
Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html#nat-gateway-basics
upvoted 1 times
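The AZ-independent architecture quoted above amounts to one routing decision per subnet: each private subnet's default route points at the NAT gateway in its own Availability Zone, so losing one AZ does not take out egress for the others. A small sketch with hypothetical subnet and NAT gateway IDs:

```python
# Sketch of option C's AZ-independent routing: pair each private subnet with
# the NAT gateway in the SAME Availability Zone. IDs below are hypothetical.

def build_routes(subnet_az: dict, natgw_by_az: dict) -> dict:
    """Return {subnet_id: route_entry}, each entry mirroring the parameters
    you would pass to ec2.create_route for that subnet's route table."""
    return {
        subnet: {
            "DestinationCidrBlock": "0.0.0.0/0",
            "NatGatewayId": natgw_by_az[az],  # AZ-local gateway only
        }
        for subnet, az in subnet_az.items()
    }

routes = build_routes(
    {"subnet-a1": "us-east-1a", "subnet-b1": "us-east-1b"},
    {"us-east-1a": "nat-0aaa", "us-east-1b": "nat-0bbb"},
)
```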
5 months, 1 week ago
Selected Answer: C
Replace NAT Instances with Gateway
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: C
Correct answer is C
upvoted 2 times
Community vote distribution
C (100%)
Topic 1
Question #231
An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B.
Both VPCs are in the same AWS account.
Which solution will provide the required access MOST securely?
A. Create a DB instance security group that allows all traffic from the public IP address of the application server in VPC A.
B. Configure a VPC peering connection between VPC A and VPC B.
C. Make the DB instance publicly accessible. Assign a public IP address to the DB instance.
D. Launch an EC2 instance with an Elastic IP address into VPC B. Proxy all requests through the new EC2 instance.
Correct Answer:
B
Highly Voted
5 months ago
A is correct. B will work but is not the most secure method, since it will allow everything in VPC A to talk to everything in VPC B and vice versa, not
at all secure. A, on the other hand, will only allow the application server in VPC A (since you select its IP address) to talk to the database: you are
allowing only the required connectivity. See the link for this exact use case:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
upvoted 7 times
5 months ago
" allows all traffic from the public IP address" Nice bro niceee This is absolutely the most secure method at all. :)))
upvoted 9 times
2 months, 3 weeks ago
He must be the security engineer lolol :D
"Jaybee" - please don't ever say that traffic over the public internet is secure :D
upvoted 1 times
3 months, 1 week ago
:)))))))))
upvoted 1 times
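The two tightening options debated in this thread differ only in what the database security group's ingress rule references: a single source IP (option A) or the app server's security group (the usual companion to peering in option B). A sketch of both rule shapes, as the permission dicts `ec2.authorize_security_group_ingress` accepts; group IDs, the port, and the address are hypothetical:

```python
# Sketch of the two security-group ingress rules discussed above.
# Port 3306 assumes a MySQL-style database; all IDs are hypothetical.

def db_ingress_from_sg(app_sg_id: str) -> dict:
    """Allow the DB port only from members of the app server's security
    group (works across a same-Region VPC peering connection)."""
    return {
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": app_sg_id}],
    }

def db_ingress_from_ip(app_ip: str) -> dict:
    """Option A's approach: allow the DB port only from one /32 address."""
    return {
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "IpRanges": [{"CidrIp": f"{app_ip}/32"}],
    }

rule = db_ingress_from_sg("sg-0app")
```

Either rule would be passed as one element of `IpPermissions` when authorizing ingress on the DB security group; note that with peering the traffic stays on private addresses, which is the crux of the B camp's argument.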
Most Recent
1 week, 6 days ago
Selected Answer: B
I don't like A because the security group setting is wrong: it is set up to allow all traffic from the public IP address. If the security group setting were
correct, then I would go for A.
I don't like B because it needs a security group set up as well, on top of peering.
For exam purposes only, I will go with the least-worst choice, which is B.
upvoted 1 times
2 weeks, 1 day ago
Selected Answer: A
The keywords are "access MOST securely", hence option A meets these requirements.
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: A
Each VPC security group rule makes it possible for a specific source to access a DB instance in a VPC that is associated with that VPC security group.
The source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide
upvoted 1 times
3 weeks, 1 day ago
Selected Answer: B
Most secure = VPC peering
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: B
I vote for option B.
Community vote distribution: B (74%), A (26%)
upvoted 1 times
1 month ago
Selected Answer: B
BBBB. A is not secure
upvoted 1 times
3 months ago
Selected Answer: A
Peering is not secure for B, as there is no control on access from A to B.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: B
B But what a crappy question/answers ...
upvoted 3 times
4 months, 3 weeks ago
Answer is B.
A is not the answer <-- it is not SECURE to have your traffic flow over the internet to the database.
upvoted 4 times
4 months, 3 weeks ago
Selected Answer: B
Should be B.
upvoted 1 times
5 months ago
Selected Answer: B
Answer: B
upvoted 2 times
5 months ago
Selected Answer: B
A) Not possible; the DB instance does not have a public IP.
upvoted 2 times
5 months ago
Selected Answer: A
Agreeing with JayBee65. See link for exact solution:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SettingUp.html#CHAP_SettingUp.SecurityGroup
upvoted 2 times
5 months, 2 weeks ago
Ans: B
https://aws.amazon.com/premiumsupport/knowledge-center/rds-connectivity-instance-subnet-vpc/
My DB instance can't be accessed by an Amazon EC2 instance from a different VPC
Create a VPC peering connection between the VPCs. A VPC peering connection allows two VPCs to communicate with each other using private IP
addresses.
1. Create and accept a VPC peering connection.
Important: If the VPCs are in the same AWS account, be sure that the IPv4 CIDR blocks don't overlap. For more information, see VPC peering
limitations.
2. Update both route tables.
3. Update your security groups to reference peer VPC groups.
4. Activate DNS resolution support for your VPC peering connection.
5. On the Amazon Elastic Compute Cloud (Amazon EC2) instance, test the VPC peering connection by using a networking utility. See the following
example:
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: B
B. Configure a VPC peering connection between VPC A and VPC B.
The most secure solution to provide access to the database in VPC B from the application running on an EC2 instance in VPC A is to configure a
VPC peering connection between the two VPCs. This will allow the application to access the database using the private IP addresses, and will not
require any public IP addresses or Internet access. The traffic will be confined to the VPCs, and can be further secured with security group rules.
upvoted 2 times
Topic 1
Question #232
A company runs demonstration environments for its customers on Amazon EC2 instances. Each environment is isolated in its own VPC. The
company’s operations team needs to be notified when RDP or SSH access to an environment has been established.
A. Configure Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected.
B. Configure the EC2 instances with an IAM instance profile that has an IAM role with the AmazonSSMManagedInstanceCore policy attached.
C. Publish VPC flow logs to Amazon CloudWatch Logs. Create required metric filters. Create an Amazon CloudWatch metric alarm with a
notification action for when the alarm is in the ALARM state.
D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic.
Correct Answer:
C
Highly Voted
5 months, 1 week ago
Selected Answer: C
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 8 times
4 months, 4 weeks ago
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-records-examples.html#flow-log-example-accepted-rejected
Adding this to support that VPC flow logs can be used to capture Accepted or Rejected SSH and RDP traffic.
upvoted 2 times
1 month ago
I don't think C would be an acceptable solution ... the request is to be notified WHEN an SSH and/or RDP connection is established, so it
requires real-time monitoring, and that is something solution C does not provide ... I would select A as the correct answer.
upvoted 1 times
Most Recent
2 weeks, 1 day ago
Selected Answer: C
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log
data can be published to the following locations: Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose. After you create a flow
log, you can retrieve and view the flow log records in the log group, bucket, or delivery stream that you configured.
Flow logs can help you with a number of tasks, such as:
Diagnosing overly restrictive security group rules
Monitoring the traffic that is reaching your instance
Determining the direction of the traffic to and from the network interfaces
Ref link: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
upvoted 1 times
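The flow-log approach in answer C hinges on the metric filter patterns: match ACCEPTed records whose destination port is 22 (SSH) or 3389 (RDP). A sketch under the assumption that the log group uses the default flow log record format; the filter names are hypothetical, and the space-delimited pattern syntax should be checked against the CloudWatch Logs documentation:

```python
# Sketch of option C's metric filters over VPC flow logs. The field order
# follows the default flow log format: version account-id interface-id
# srcaddr dstaddr srcport dstport protocol packets bytes start end action
# log-status. Filter names are hypothetical.

def ssh_rdp_filter_patterns() -> dict:
    return {
        "ssh-established": (
            "[version, account, eni, src, dst, srcport, dstport = 22, "
            "protocol, packets, bytes, start, end, action = ACCEPT, status]"
        ),
        "rdp-established": (
            "[version, account, eni, src, dst, srcport, dstport = 3389, "
            "protocol, packets, bytes, start, end, action = ACCEPT, status]"
        ),
    }

patterns = ssh_rdp_filter_patterns()
# Each value would be the filterPattern argument to
#   logs.put_metric_filter(logGroupName=..., filterName=...,
#                          filterPattern=..., metricTransformations=[...])
# with a CloudWatch alarm and SNS notification on the resulting metric.
```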
3 weeks ago
Selected Answer: C
Seems like C:
https://aws.amazon.com/tr/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 1 times
3 weeks ago
Selected Answer: D
D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic. This setup allows the EventBridge rule to capture
instance state change events, such as when RDP or SSH access is established. The rule can then send notifications to the specified SNS topic, which
is subscribed by the operations team.
upvoted 2 times
1 week, 3 days ago
D is wrong. EC2 instance state change is only for pending, running etc. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-
instance-state-changes.html you can't have state change of ssh or rdp.
upvoted 1 times
Community vote distribution: C (71%), D (18%), other (11%)
2 months, 3 weeks ago
Selected Answer: C
C:
https://www.youtube.com/watch?v=KAe3Eju59OU
upvoted 1 times
3 months, 4 weeks ago
Selected Answer: C
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 1 times
5 months ago
Selected Answer: A
A. Configuring Amazon CloudWatch Application Insights to create AWS Systems Manager OpsItems when RDP or SSH access is detected would be
the most appropriate solution in this scenario. This would allow the operations team to be notified when RDP or SSH access has been established
and provide them with the necessary information to take action if needed. Additionally, Amazon CloudWatch Application Insights would allow for
monitoring and troubleshooting of the system in real-time.
upvoted 1 times
5 months ago
Selected Answer: C
EC2 Instance State-change Notifications are not the same as RDP or SSH established connection notifications. Use Amazon CloudWatch Logs to
monitor SSH access to your Amazon EC2 Linux instances so that you can monitor rejected (or established) SSH connection requests and take
action.
upvoted 4 times
5 months, 1 week ago
Selected Answer: A
The answer can be A or C, depending on whether the requirement is real-time notification.
A: Allows the operations team to be notified in real-time when access is established, and also provides visibility into the access events through the
OpsItems.
C: The logs will need to be analyzed and metric filters applied to detect access, and then the alarm will trigger based on that analysis. This method
could have a delay in providing notifications. Thus, not the best solution if real-time notification is required.
Why not D: RDP or SSH access does not cause an EC2 instance to have a state change. The state change events that Amazon EventBridge can listen
for include stopping, starting, and terminated instances, which do not apply to RDP or SSH access. But RDP or SSH connection to an EC2 instance
does generate an event in the system, such as a log entry, which can be used to notify the operations team. Since it's a log, you would require a
service that monitors logs like CloudTrail, VPC Flow logs, or AWS Systems Manager Session Manager.
upvoted 2 times
5 months ago
I completely agree with the logic here, but I'm thinking C, since I believe you will need to "Create required metric filters" in order to detect RDP
or SSH access, and this is not specified in the question, see https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter-create-
OpsItems-from-CloudWatch-Alarms.html
upvoted 2 times
5 months, 1 week ago
Selected Answer: C
It's C fam. RDP or SSH connections won't change the state of the EC2 instance, so D doesn't make sense.
upvoted 4 times
5 months, 2 weeks ago
D. Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic.
EC2 instances send events to EventBridge when a state change occurs, such as when a new RDP or SSH connection is established; you can use
EventBridge to configure a rule that listens for these events and triggers an action, like sending an email or SMS, when the connection is detected.
The operations team can be notified by subscribing to the Amazon Simple Notification Service (Amazon SNS) topic, which can be configured as the
target of the EventBridge rule.
upvoted 3 times
5 months, 1 week ago
The state changes are: pending, running, stopping, stopped, shutting-down, terminated.
https://aws.amazon.com/blogs/security/how-to-monitor-and-visualize-failed-ssh-access-attempts-to-amazon-ec2-linux-instances/
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
Configure an Amazon EventBridge rule to listen for events of type EC2 Instance State-change Notification. Configure an Amazon Simple
Notification Service (Amazon SNS) topic as a target. Subscribe the operations team to the topic. This approach allows you to set up a rule that
listens for state change events on the EC2 instances, specifically for when RDP or SSH access is established, and trigger a notification via Amazon
SNS to the operations team. This way they will be notified when RDP or SSH access to an environment has been established.
upvoted 3 times
3 months, 2 weeks ago
um, isn't "EC2 Instance State-change" like running, terminated, or stopped?
upvoted 1 times
Topic 1
Question #233
A solutions architect has created a new AWS account and must secure AWS account root user access.
Which combination of actions will accomplish this? (Choose two.)
A. Ensure the root user uses a strong password.
B. Enable multi-factor authentication to the root user.
C. Store root user access keys in an encrypted Amazon S3 bucket.
D. Add the root user to a group containing administrative permissions.
E. Apply the required permissions to the root user with an inline policy document.
Correct Answer:
AB
1 week, 1 day ago
Selected Answer: AB
Options A & B are the CORRECT answers.
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: AB
Options A & B are the right answers.
upvoted 1 times
1 month, 3 weeks ago
Selected Answer: AB
See https://docs.aws.amazon.com/SetUp/latest/UserGuide/best-practices-root-user.html
upvoted 1 times
3 months ago
Selected Answer: AB
A and B are the correct answers:
Option A: A strong password is always required for any AWS account you create, and should not be shared or stored anywhere as there is always a
risk.
Option B: This is following AWS best practice, by enabling MFA on your root user which provides another layer of security on the account and
unauthorised access will be denied if the user does not have the correct password and MFA.
upvoted 1 times
3 months, 2 weeks ago
Selected Answer: AB
AB are the right answers.
upvoted 1 times
3 months, 3 weeks ago
This is probably the hardest question in AWS history
upvoted 3 times
4 months, 4 weeks ago
Selected Answer: AB
AB is the only feasible answer here.
upvoted 3 times
5 months ago
Selected Answer: BE
B. Enabling multi-factor authentication for the root user provides an additional layer of security to ensure that only authorized individuals are able
to access the root user account.
E. Applying the required permissions to the root user with an inline policy document ensures that the root user only has the necessary permissions
to perform the necessary tasks, and not any unnecessary permissions that could potentially be misused.
upvoted 2 times
Community vote distribution
AB (71%)
BD (19%)
10%
5 months ago
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
upvoted 1 times
5 months ago
The other options are not sufficient to secure the root user access because:
A. A strong password alone is not enough to protect against potential security threats such as phishing or brute force attacks.
C. Storing the root user access keys in an encrypted S3 bucket does not address the root user's authentication process.
D. Adding the root user to a group with administrative permissions does not address the root user's authentication process and does not
provide an additional layer of security.
upvoted 1 times
2 months, 2 weeks ago
Strong passwords + multi factor is the counter to brute force...
upvoted 1 times
5 months ago
Selected Answer: AB
AB, obviously
upvoted 1 times
5 months, 1 week ago
Selected Answer: AB
Root user already has admin, so D is not correct
upvoted 1 times
5 months, 1 week ago
Selected Answer: AB
AB are correct
upvoted 1 times
5 months, 1 week ago
Selected Answer: AB
D is incorrect as root user already has full admin access.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: AB
D. Add the root user to a group containing administrative permissions. >> It's not about security; actually it's insecure, so >> A & B
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: BD
BD is correct
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: BD
https://www.examtopics.com/discussions/amazon/view/21794-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
5 months ago
What would D achieve exactly??? :)
upvoted 1 times
5 months, 1 week ago
AB are correct in this link
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: AB
https://docs.aws.amazon.com/accounts/latest/reference/best-practices-root-user.html
* Enable AWS multi-factor authentication (MFA) on your AWS account root user. For more information, see Using multi-factor authentication (MFA)
in AWS in the IAM User Guide.
* Never share your AWS account root user password or access keys with anyone.
* Use a strong password to help protect access to the AWS Management Console. For information about managing your AWS account root user
password, see Changing the password for the root user.
upvoted 1 times
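A minimal sketch of verifying option B after the fact: the IAM GetAccountSummary API returns a SummaryMap whose AccountMFAEnabled entry is 1 once MFA is enabled on the root user. The dict below is a stubbed example response, not a live API call.

```python
# Stubbed check against an IAM GetAccountSummary SummaryMap (assumption: the
# dict stands in for the real API response).
def root_mfa_enabled(summary_map):
    """True when the account's root user has MFA enabled."""
    return summary_map.get("AccountMFAEnabled", 0) == 1

stub_summary = {"AccountMFAEnabled": 1, "Users": 5}
print(root_mfa_enabled(stub_summary))  # -> True
```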
Topic 1
Question #234
A company is building a new web-based customer relationship management application. The application will use several Amazon EC2 instances
that are backed by Amazon Elastic Block Store (Amazon EBS) volumes behind an Application Load Balancer (ALB). The application will also use
an Amazon Aurora database. All data for the application must be encrypted at rest and in transit.
Which solution will meet these requirements?
A. Use AWS Key Management Service (AWS KMS) certificates on the ALB to encrypt data in transit. Use AWS Certificate Manager (ACM) to encrypt the EBS volumes and Aurora database storage at rest.
B. Use the AWS root account to log in to the AWS Management Console. Upload the company’s encryption certificates. While in the root account, select the option to turn on encryption for all data at rest and in transit for the account.
C. Use AWS Key Management Service (AWS KMS) to encrypt the EBS volumes and Aurora database storage at rest. Attach an AWS Certificate Manager (ACM) certificate to the ALB to encrypt data in transit.
D. Use BitLocker to encrypt all data at rest. Import the company’s TLS certificate keys to AWS Key Management Service (AWS KMS). Attach the KMS keys to the ALB to encrypt data in transit.
Correct Answer:
C
2 weeks, 3 days ago
Selected Answer: C
Option C is correct
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: C
Option C fulfills the requirements.
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
C is correct; A reverses the roles of each service.
upvoted 3 times
5 months, 1 week ago
Selected Answer: C
C is correct!
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: C
C is the correct answer
upvoted 2 times
Community vote distribution
C (100%)
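As a sketch of option C, these are the kinds of request parameters involved: KMS-backed encryption on the EBS volume (at rest) and an ACM certificate on the ALB's HTTPS listener (in transit). All ARNs, sizes, and the AZ are placeholders, not values from the question.

```python
# Sketch of the two API calls option C implies (placeholder ARNs/values).
ebs_volume_params = {   # ec2:CreateVolume -- encryption at rest via KMS
    "AvailabilityZone": "us-east-1a",
    "Size": 100,
    "Encrypted": True,
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
}
alb_listener_params = {  # elbv2:CreateListener -- encryption in transit via ACM
    "Protocol": "HTTPS",
    "Port": 443,
    "Certificates": [{
        "CertificateArn":
            "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
    }],
}
```

Aurora storage encryption works the same way as the EBS case: you pass a KMS key when creating the cluster.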
Topic 1
Question #235
A company is moving its on-premises Oracle database to Amazon Aurora PostgreSQL. The database has several applications that write to the
same tables. The applications need to be migrated one by one with a month in between each migration. Management has expressed concerns
that the database has a high number of reads and writes. The data must be kept in sync across both databases throughout the migration.
What should a solutions architect recommend?
A. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC)
replication task and a table mapping to select all tables.
B. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture
(CDC) replication task and a table mapping to select all tables.
C. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance.
Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables.
D. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance.
Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
Correct Answer:
C
Highly Voted
4 months, 3 weeks ago
Selected Answer: C
C: because we need SCT to convert from Oracle to PostgreSQL, and we need a memory optimized instance for databases, not compute optimized.
upvoted 6 times
Most Recent
1 month ago
BBBBBBBBBBBBB
upvoted 2 times
2 months, 1 week ago
B chatgpt
upvoted 2 times
4 months, 1 week ago
DMS+SCT for Oracle to Aurora PostgreSQL migration
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-
sct.html
upvoted 2 times
4 months, 4 weeks ago
https://aws.amazon.com/ko/premiumsupport/knowledge-center/dms-memory-optimization/
upvoted 1 times
5 months ago
Selected Answer: C
It has to be either C or D because it requires Schema Conversion Tool to convert Oracle database to Amazon Aurora PostgreSQL. C would be the
better choice here because it replicates a memory optimized instance, which is recommended for databases. Also, the database must be kept in
sync, so they require mapping to select all tables.
upvoted 3 times
5 months ago
A or C are both valid options. Both options involve using AWS DataSync for the initial migration, and then using AWS Database Migration Service
(AWS DMS) to create a change data capture (CDC) replication task for ongoing data synchronization.
Option A: Uses a memory optimized replication instance.
Option C: Uses a compute optimized replication instance.
Option A is a better choice for migrations where the data is more complex and may require more memory.
Option C is a better choice for migrations that require more processing power.
It also depends on the size of the data, the complexity of the data, and the resources available in the target Aurora cluster.
upvoted 1 times
5 months ago
Why would you not use the Schema Conversion Tool, which is designed specifically to convert from one DB engine to another? It can convert Oracle to Aurora PostgreSQL, see https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html. Then it is a choice of C or D. Since you want to move all tables, C makes more sense than D.
A and B are wrong since DataSync deals with data, not databases, see https://aws.amazon.com/datasync/faqs/.
upvoted 4 times
Community vote distribution
C (87%)
13%
5 months, 1 week ago
Selected Answer: A
Initial migration is full using DataSync and on-going replication is through CDC for the changes. The full load was already performed so no need to
do it again as with Answer B.
upvoted 1 times
5 months ago
Changing my answer to C as you need schema conversion from Oracle the PostgreSQL
upvoted 2 times
5 months, 1 week ago
Correct answer is C
upvoted 2 times
5 months, 1 week ago
Selected Answer: A
A is correct. Initial migration is full using DataSync and on-going replication is through CDC Task -
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html
upvoted 1 times
5 months, 2 weeks ago
B. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture
(CDC) replication task and a table mapping to select all tables.
AWS DataSync can be used for the initial migration of the data, it can transfer large amount of data quickly and securely over the network. AWS
Database Migration Service (AWS DMS) can be used to replicate changes made to the data in the source database to the target database. A full
load plus CDC replication task allows for the initial migration of the data and then continuously replicate any changes made to the data in the
source database to the target database. This will ensure that the data is kept in sync across both databases throughout the migration process.
Selecting all tables in the table mapping will ensure that all data is replicated, as the migration process will be done in several steps, it will be
important to make sure that all data is kept in sync.
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: C
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/46704-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
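A sketch of the two DMS settings option C hinges on: the full-load-plus-CDC migration type, and a table mapping whose "%" wildcards select every schema and table. The rule id and name are illustrative.

```python
import json

# Sketch of a DMS task per option C (illustrative rule id/name).
migration_type = "full-load-and-cdc"   # initial copy plus ongoing CDC sync
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "select-all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},  # all tables
        "rule-action": "include",
    }]
}
print(json.dumps(table_mappings))
```

The CDC portion is what keeps both databases in sync while the applications migrate one by one.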
Topic 1
Question #236
A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2
instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly
available solution that requires the least amount of change to the application.
Which solution meets these requirements?
A. Use Amazon S3 to host the front-end layer. Use AWS Lambda functions for the application layer. Move the database to an Amazon
DynamoDB table. Use Amazon S3 to store and serve users’ images.
B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS DB instance with multiple read replicas to serve users’ images.
C. Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto Scaling group for the application layer. Move the
database to a memory optimized instance type to store and serve users’ images.
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an
Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images.
Correct Answer:
A
Highly Voted
5 months ago
Selected Answer: B
B and D very similar with D being the 'best' solution but it is not the one that requires the least amount of development changes as the application
would need to be changed to store images in S3 instead of DB
upvoted 5 times
Most Recent
1 week, 3 days ago
"Least amount of change to the application": A has lots of changes, completely revamping the application with lots of new pieces. D is closest, with only the addition of S3 to store images, which is the right move. You do not want to store images in any database anyway.
upvoted 1 times
4 weeks, 1 day ago
Selected Answer: D
Option D meets the requirements.
upvoted 1 times
3 months, 1 week ago
D is correct
upvoted 2 times
5 months ago
Selected Answer: D
RDS multi AZ.
upvoted 2 times
5 months, 1 week ago
Selected Answer: D
D is correct as application changes need to be minimal
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
Correct answer is D
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
For "highly available": Multi-AZ.
For "least amount of changes to the application": Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
upvoted 4 times
Community vote distribution
D (75%)
B (25%)
5 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/24840-exam-aws-certified-solutions-architect-associate-saa-c02/
Please ExamTopics, review your own answers
upvoted 4 times
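As a sketch of option D's Elastic Beanstalk side, the option settings below request a load-balanced environment and a Multi-AZ RDS instance. The namespaces and option names follow Elastic Beanstalk's configuration-options scheme; treat the values as illustrative.

```python
# Sketch of Elastic Beanstalk option settings for option D (illustrative values).
option_settings = [
    # Load-balanced environment instead of a single instance
    {"Namespace": "aws:elasticbeanstalk:environment",
     "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
    # Multi-AZ for the attached RDS database
    {"Namespace": "aws:rds:dbinstance",
     "OptionName": "MultiAZDatabase", "Value": "true"},
]
for s in option_settings:
    print(s["Namespace"], s["OptionName"], "=", s["Value"])
```

The remaining piece of option D, storing images in S3, is an application change but a small one compared to rewriting the tiers as in option A.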
Topic 1
Question #237
An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both VPCs are in separate AWS accounts. The network administrator needs to design a solution to configure secure access to the EC2 instance in VPC-B from VPC-A. The
connectivity should not have a single point of failure or bandwidth concerns.
Which solution will meet these requirements?
A. Set up a VPC peering connection between VPC-A and VPC-B.
B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.
C. Attach a virtual private gateway to VPC-B and set up routing from VPC-A.
D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A.
Correct Answer:
A
Highly Voted
5 months ago
Selected Answer: A
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely
on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 6 times
Most Recent
2 weeks ago
D, VPC PEERING IS IN SAME ACCOUNT
upvoted 1 times
1 week, 3 days ago
No, VPC peering can be used across accounts.
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 1 times
1 month ago
DDDDDDDDDDDDDD
upvoted 2 times
1 month ago
This is the only viable solution
Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-A
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
"You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account."
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 2 times
5 months ago
Selected Answer: A
correct answer is A and as mentioned by JayBee65 below, key reason being that solution should not have a single point of failure and bandwidth
restrictions
the following paragraph is taken from the AWS docs page linked below that backs this up
"AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely
on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck."
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
upvoted 2 times
Community vote distribution
A (94%)
6%
5 months, 1 week ago
Selected Answer: B
A VPC endpoint gateway to the EC2 instance is more specific and more secure than forming a VPC peering that exposes the whole of the VPC infrastructure just for one connection.
upvoted 1 times
5 months ago
Your logic is correct, but security is not a requirement here - the requirements are "The connectivity should not have a single point of failure or bandwidth concerns." A VPC gateway endpoint would form a single point of failure, so B is incorrect (and C and D are incorrect for the same reason; they create single points of failure).
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: A
Correct answer is A
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: A
VPC peering allows resources in different VPCs to communicate with each other as if they were within the same network. This solution would
establish a direct network route between VPC-A and VPC-B, eliminating the need for a single point of failure or bandwidth concerns.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
https://www.examtopics.com/discussions/amazon/view/27763-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
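A sketch of the cross-account peering request behind option A. The account ID, VPC IDs, and Region are placeholders; as the comments above note, peering works across accounts, and the accepter account then accepts the connection.

```python
# Sketch of ec2:CreateVpcPeeringConnection for a cross-account peering
# (all identifiers are placeholders).
create_peering_params = {
    "VpcId": "vpc-aaaa1111",        # requester: VPC-A
    "PeerVpcId": "vpc-bbbb2222",    # accepter: VPC-B
    "PeerOwnerId": "222233334444",  # AWS account that owns VPC-B
    "PeerRegion": "us-east-1",
}
print(create_peering_params)
```

After acceptance, each side adds a route for the other VPC's CIDR pointing at the peering connection; there is no gateway in the path to become a single point of failure.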
Topic 1
Question #238
A company wants to experiment with individual AWS accounts for its engineer team. The company wants to be notified as soon as the Amazon EC2 instance usage for a given month exceeds a specific threshold for each account.
What should a solutions architect do to meet this requirement MOST cost-effectively?
A. Use Cost Explorer to create a daily report of costs by service. Filter the report by EC2 instances. Configure Cost Explorer to send an Amazon Simple Email Service (Amazon SES) notification when a threshold is exceeded.
B. Use Cost Explorer to create a monthly report of costs by service. Filter the report by EC2 instances. Configure Cost Explorer to send an Amazon Simple Email Service (Amazon SES) notification when a threshold is exceeded.
C. Use AWS Budgets to create a cost budget for each account. Set the period to monthly. Set the scope to EC2 instances. Set an alert threshold for the budget. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when a threshold is exceeded.
D. Use AWS Cost and Usage Reports to create a report with hourly granularity. Integrate the report data with Amazon Athena. Use Amazon EventBridge to schedule an Athena query. Configure an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification when a threshold is exceeded.
Correct Answer:
B
Highly Voted
5 months, 2 weeks ago
Selected Answer: C
AWS Budgets allows you to create budgets for your AWS accounts and set alerts when usage exceeds a certain threshold. By creating a budget for
each account, specifying the period as monthly and the scope as EC2 instances, you can effectively track the EC2 usage for each account and be
notified when a threshold is exceeded. This solution is the most cost-effective option as it does not require additional resources such as Amazon
Athena or Amazon EventBridge.
upvoted 5 times
Most Recent
4 months ago
Selected Answer: D
I go with D. It says "as soon as"; "daily" reports seem to be a bit long of a time frame to wait, in my opinion.
upvoted 1 times
3 months, 3 weeks ago
Athena can only be used on S3; that is enough to discard D
upvoted 1 times
4 months ago
Actually, I take that back. It clearly says "Cost effective."
upvoted 3 times
5 months, 1 week ago
C: AWS Budgets allows you to set a budget for costs and usage for your accounts and you can set alerts when the budget threshold is exceeded in
real-time which meets the requirement.
Why not B: B would be the most cost-effective if the requirements didn't ask for real-time notification. You would not incur additional costs for the
daily or monthly reports and the notifications. But doesn't provide real-time alerts.
upvoted 4 times
5 months, 1 week ago
Selected Answer: C
Agree...C
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: C
Answer is C
upvoted 1 times
Community vote distribution
C (92%)
8%
5 months, 2 weeks ago
Selected Answer: C
https://aws.amazon.com/getting-started/hands-on/control-your-costs-free-tier-budgets/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
AWS budget IMO, it's done for it
upvoted 2 times
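A sketch of the AWS Budgets objects option C describes: a monthly cost budget scoped to EC2, plus a notification that fires at a percentage threshold and publishes to an SNS topic. The budget name, limit, service filter value, and topic ARN are placeholders.

```python
# Sketch of budgets:CreateBudget inputs per option C (placeholder values).
budget = {
    "BudgetName": "ec2-monthly",
    "BudgetLimit": {"Amount": "100", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
    "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
}
notification = {
    "Notification": {
        "NotificationType": "ACTUAL",          # alert on actual spend
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,                     # percent of the budget limit
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [{
        "SubscriptionType": "SNS",
        "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts",
    }],
}
print(budget["BudgetName"], notification["Notification"]["Threshold"])
```

One such budget per account gives the per-account alerting the question asks for, with no Athena or EventBridge plumbing.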
Topic 1
Question #239
A solutions architect needs to design a new microservice for a company’s application. Clients must be able to call an HTTPS endpoint to reach the
microservice. The microservice also must use AWS Identity and Access Management (IAM) to authenticate calls. The solutions architect will write
the logic for this microservice by using a single AWS Lambda function that is written in Go 1.x.
Which solution will deploy the function in the MOST operationally efficient way?
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
B. Create a Lambda function URL for the function. Specify AWS_IAM as the authentication type.
C. Create an Amazon CloudFront distribution. Deploy the function to Lambda@Edge. Integrate IAM authentication logic into the Lambda@Edge function.
D. Create an Amazon CloudFront distribution. Deploy the function to CloudFront Functions. Specify AWS_IAM as the authentication type.
Correct Answer:
A
Highly Voted
5 months, 2 weeks ago
Selected Answer: A
A. Create an Amazon API Gateway REST API. Configure the method to use the Lambda function. Enable IAM authentication on the API.
This option is the most operationally efficient as it allows you to use API Gateway to handle the HTTPS endpoint and also allows you to use IAM to
authenticate the calls to the microservice. API Gateway also provides many additional features such as caching, throttling, and monitoring, which
can be useful for a microservice.
upvoted 11 times
Most Recent
3 weeks, 6 days ago
Selected Answer: B
https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html
upvoted 1 times
4 months ago
A is correct, 100%
upvoted 2 times
4 months, 1 week ago
Why is C not correct?
upvoted 3 times
1 month ago
Lambda@Edge only supports Node.js or Python
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: A
https://asanchez.dev/blog/deploy-api-go-aws-lambda-gateway/
upvoted 1 times
5 months, 2 weeks ago
D
https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
upvoted 1 times
5 months ago
With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations. But you are using Go 1.x, and Lambda supports Go, so A makes a lot more sense than D.
upvoted 1 times
Community vote distribution
A (92%)
8%
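A sketch of the API Gateway method configuration option A implies: IAM (SigV4) authorization on the method, with the Lambda function as the integration target. The API ID, resource ID, Region, and function name are placeholders.

```python
# Sketch of apigateway:PutMethod for option A (placeholder identifiers).
put_method_params = {
    "restApiId": "abc123",
    "resourceId": "res456",
    "httpMethod": "POST",
    "authorizationType": "AWS_IAM",   # callers must sign requests with SigV4
}
# Lambda proxy integration URI for the (hypothetical) Go function
integration_uri = (
    "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
    "arn:aws:lambda:us-east-1:111122223333:function:my-go-service/invocations"
)
print(put_method_params["authorizationType"])
```

API Gateway then provides the HTTPS endpoint and enforces IAM authentication before the request ever reaches the function.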
Topic 1
Question #240
A company previously migrated its data warehouse solution to AWS. The company also has an AWS Direct Connect connection. Corporate office users query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 50 MB and each
webpage sent by the visualization tool is approximately 500 KB. Result sets returned by the data warehouse are not cached.
Which solution provides the LOWEST data transfer egress cost for the company?
A. Host the visualization tool on premises and query the data warehouse directly over the internet.
B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet.
C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same
AWS Region.
D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in
the same Region.
Correct Answer:
C
Highly Voted
3 months, 4 weeks ago
Selected Answer: D
A. --> No since if you access via internet you are creating egress traffic.
B. -->It's a good choice to have both DWH and visualization in the same region to lower the egress transfer (i.e. data going egress/out of the
region) but if you access over internet you might still have egress transfer.
C. -> Valid but in this case you send out of AWS 50MB if you query the DWH instead of the visualization tool, D removes this need since puts the
visualization tools in AWS with the DWH so reduces data returned out of AWS from 50MB to 500KB
D. --> Correct, see explanation on answer C
-------------------------------------------------------------------------------------------------------------------------------------------
Useful links:
AWS Direct Connect: create a connection in an AWS Direct Connect location to establish a network connection from your premises to an AWS Region.
https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
upvoted 5 times
Most Recent
5 months ago
Selected Answer: D
D let you reduce at minimum the data transfer costs
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
D: Direct Connect connection at a location in the same Region will provide the lowest data transfer egress cost, improved performance, and lower
complexity
Why it is not C is because the visualization tool is hosted on-premises, as it's not hosted in the same region as the data warehouse the data transfer
between them would occur over the internet, thus, would incur in egress data transfer costs.
upvoted 4 times
1 week, 3 days ago
With option C, traffic doesn't travel through the internet because we have Direct Connect. If you are hosting your visualization tool in the same Region, why do you need the Direct Connect connection that D has? Doesn't make sense. So, C is the right answer.
upvoted 1 times
5 months, 1 week ago
Selected Answer: C
https://www.nops.io/reduce-aws-data-transfer-costs-dont-get-stung-by-hefty-egress-fees/
upvoted 2 times
5 months ago
Whilst "Direct Connect can help lower egress costs even after taking the installation costs into account. This is because AWS charges lower
transfer rates." D removes the need to send the query results out of AWS and instead returns the web page, so reduces data returned from
50MB to 500KB, so D
upvoted 1 times
Community vote distribution
D (90%)
10%
5 months, 2 weeks ago
Selected Answer: D
Correct answer is D
upvoted 4 times
5 months, 2 weeks ago
Selected Answer: D
Should be D
https://aws.amazon.com/directconnect/pricing/
https://aws.amazon.com/blogs/aws/aws-data-transfer-prices-reduced/
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/47140-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
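The egress argument for D can be put in rough numbers: with the tool in AWS, only the ~500 KB webpage leaves the Region per query instead of the ~50 MB result set. The query count below is arbitrary and pricing is ignored; the point is only the ~100x difference in bytes leaving AWS.

```python
# Back-of-the-envelope egress comparison from the question's figures.
QUERY_MB = 50.0    # result set returned by the warehouse
PAGE_MB = 0.5      # 500 KB webpage from the visualization tool
queries = 1000     # arbitrary illustrative workload

egress_if_tool_on_prem = QUERY_MB * queries  # option C: results leave AWS
egress_if_tool_in_aws = PAGE_MB * queries    # option D: only pages leave
print(egress_if_tool_on_prem / egress_if_tool_in_aws)  # -> 100.0
```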
Topic 1
Question #241
An online learning company is migrating to the AWS Cloud. The company maintains its student records in a PostgreSQL database. The company
needs a solution in which its data is available and online across multiple AWS Regions at all times.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Migrate the PostgreSQL database to a PostgreSQL cluster on Amazon EC2 instances.
B. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance with the Multi-AZ feature turned on.
C. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Create a read replica in another Region.
D. Migrate the PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. Set up DB snapshots to be copied to another Region.
Correct Answer:
C
Highly Voted
3 months, 4 weeks ago
Selected Answer: C
Multi-AZ is not the same as multi-Region
upvoted 15 times
Highly Voted
5 months, 1 week ago
Selected Answer: B
B: Amazon RDS Multi-AZ feature automatically creates a synchronous replica in another availability zone and failover to the replica in the event of
an outage. This will provide high availability and data durability across multiple AWS regions which fit the requirements.
Though C may sound good, it in fact requires manual management and monitoring of the replication process due to the fact that Amazon RDS
read replicas are asynchronous, meaning there is a delay between the primary and read replica. Therefore, there will be a need to ensure that the
read replica is constantly up-to-date and someone still has to fix any read replica errors during the replication process which may cause data
inconsistency. Lastly, you still have to configure additional steps to make it fail over to the read replica.
upvoted 12 times
4 months, 1 week ago
I go with option B because:
Multi-AZ is for high availiblity
Read replicas are for low-latency
in question they talk about available online
upvoted 3 times
5 months ago
But the question is clearly asking for Multiple Regions. Multi-AZ is not across Regions.
upvoted 15 times
5 months ago
You are right, Multi-AZ is only within one Region. C would be the right answer.
upvoted 10 times
2 weeks, 2 days ago
https://aws.amazon.com/rds/features/multi-az/
Selected Answer: B
In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously
replicates the data to an instance in a different AZ.
upvoted 1 times
Most Recent
1 week, 5 days ago
Selected Answer: B
C and D just specify another single Region. This does not translate to multiple Regions.
B (Multi-AZ) means the solution will be highly available.
The data will be available in multiple regions for both B and C but B is a better solution!
upvoted 1 times
Community vote distribution
C (71%)
B (29%)
1 week, 6 days ago
Selected Answer: C
Answer B is not right, because RDS Multi-AZ always spans at least two Availability Zones within a single Region, and the question requires the RDS DB to be available in multiple Regions. Therefore, C is the most suitable answer for this question.
upvoted 1 times
1 week, 5 days ago
I would like to change my answer to "B". The question has some distractor words: "its data is available and online across multiple AWS Regions at all times". We agree that AWS is a cloud service available online around the world across its Regions. So option "B" is the most appropriate answer, since Multi-AZ focuses on the availability factor and it has the LEAST amount of operational overhead.
upvoted 1 times
2 weeks ago
Selected Answer: B
B & C both make the data available. However, B is less overhead.
I think the question is asking for data availability across multiple Regions, not for a DR solution. So RDS being accessible over a public IP will do
the trick for the data being available across Regions.
upvoted 1 times
2 weeks, 1 day ago
Selected Answer: C
Option meets the requirements, ref. link: https://aws.amazon.com/blogs/database/best-practices-for-amazon-rds-for-postgresql-cross-region-
read-replicas/
upvoted 1 times
2 weeks, 2 days ago
Selected Answer: B
In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously replicates the
data to an instance in a different AZ.
https://aws.amazon.com/rds/features/multi-az/
upvoted 1 times
1 month ago
Selected Answer: C
B is wrong because the Multi-AZ feature doesn't allow you to have replicas in another Region! (The requirement is that "data should be available and
online across multiple AWS Regions at all times") ... the only feasible option is C
upvoted 1 times
1 month, 1 week ago
Multi-AZ provides redundancy within a single Region; it does not replicate data across multiple Regions. If the requirement specifically states the
need for data availability across multiple Regions, creating a read replica in another Region (option C) would be the more appropriate choice.
upvoted 1 times
1 month, 2 weeks ago
Selected Answer: C
Multi region
upvoted 2 times
1 month, 3 weeks ago
Selected Answer: B
option C could make the data available across multiple AWS Regions, but it may not provide the same level of availability and minimal operational
overhead as option B.
upvoted 1 times
3 months ago
Selected Answer: C
Read replicas can be created across multiple AWS Regions
upvoted 4 times
3 months ago
Selected Answer: B
B: Use Multi-AZ deployments for High Availability/Failover and Read Replicas for read scalability.
upvoted 2 times
4 months, 1 week ago
Option "C" would be a better solution.
Option "B" does not specifically mention multiple Regions.
upvoted 2 times
4 months, 2 weeks ago
Selected Answer: C
"online across multiple AWS Regions"
In B we do not have any words about Regions; Multi-AZ is only for one Region!
upvoted 4 times
4 months, 3 weeks ago
Selected Answer: C
C is the correct answer, read replicas can be created cross region and can be promoted to be main database
upvoted 4 times
4 months, 3 weeks ago
Selected Answer: B
requires manual intervention to promote the read replica
upvoted 2 times
Topic 1
Question #242
A company hosts its web application on AWS using seven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2
instances be returned in response to DNS queries.
Which policy should be used to meet this requirement?
A. Simple routing policy
B. Latency routing policy
C. Multivalue routing policy
D. Geolocation routing policy
Correct Answer:
C
Highly Voted
5 months, 1 week ago
Selected Answer: C
Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. For example, use multivalue answer routing
when you want to associate your routing records with a Route 53 health check. For example, use multivalue answer routing when you need to
return multiple values for a DNS query and route traffic to multiple IP addresses.
https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
upvoted 7 times
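The quoted behavior can be sketched in a few lines (hypothetical record data, not the Route 53 API): a multivalue answer query returns up to eight randomly chosen healthy records, so unhealthy instances simply drop out of DNS responses.

```python
import random

# Minimal sketch of multivalue answer routing (hypothetical data, not the
# Route 53 API): return up to 8 healthy records per DNS query, so clients
# only ever see IPs of instances that pass their health checks.
def multivalue_answer(records, max_answers=8):
    healthy = [ip for ip, is_healthy in records if is_healthy]
    return random.sample(healthy, min(len(healthy), max_answers))

# Seven EC2 instances, two currently failing their health checks.
records = [(f"10.0.0.{i}", i not in (3, 5)) for i in range(1, 8)]
answers = multivalue_answer(records)
print(answers)  # the five healthy IPs, in random order
```

With seven instances (fewer than the eight-record cap), every healthy IP is returned, which matches the requirement in the question.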
Most Recent
3 months, 1 week ago
IPs are returned RANDOMLY for multivalue routing; is this what we want?
upvoted 3 times
3 months, 2 weeks ago
Selected Answer: C
Multivalue answer routing policy ...answer is C
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
Answer is C
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: C
Should be C
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/46491-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/46491-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Community vote distribution
C (100%)
Topic 1
Question #243
A medical research lab produces data that is related to a new study. The lab wants to make the data available with minimum latency to clinics
across the country for their on-premises, file-based applications. The data files are stored in an Amazon S3 bucket that has read-only permissions
for each clinic.
What should a solutions architect recommend to meet these requirements?
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
B. Migrate the files to each clinic’s on-premises applications by using AWS DataSync for processing.
C. Deploy an AWS Storage Gateway volume gateway as a virtual machine (VM) on premises at each clinic.
D. Attach an Amazon Elastic File System (Amazon EFS) file system to each clinic’s on-premises servers.
Correct Answer:
C
Highly Voted
5 months, 2 weeks ago
Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure
integration between an organization's on-premises IT environment and AWS's storage infrastructure. By deploying a file gateway as a virtual
machine on each clinic's premises, the medical research lab can provide low-latency access to the data stored in the S3 bucket while maintaining
read-only permissions for each clinic. This solution allows the clinics to access the data files directly from their on-premises file-based applications
without the need for data transfer or migration.
upvoted 10 times
Most Recent
4 weeks ago
Selected Answer: A
Option A meets the requirements.
upvoted 1 times
2 months, 1 week ago
For File-based applications use File Gateway: (Option A)
upvoted 1 times
3 months, 1 week ago
Definitely A.
Why are there so many wrong answers by Admins?
upvoted 4 times
3 months, 4 weeks ago
Selected Answer: A
Amazon S3 File Gateway enables you to store file data as objects in Amazon S3 cloud storage for data lakes, backups, and Machine Learning
workflows. With Amazon S3 File Gateway, each file is stored as an object in Amazon S3 with a one-to-one mapping between a file and an object.
Volume Gateway provides block storage volumes over iSCSI, backed by Amazon S3, and provides point-in-time backups as Amazon EBS snapshots.
Volume Gateway integrates with AWS Backup, an automated and centralized backup service, to protect Storage Gateway volumes.
So it's A
upvoted 3 times
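The one-to-one file/object mapping described above can be illustrated with a tiny path-translation sketch (hypothetical mount point and helper; the real gateway is an appliance, not a library):

```python
# Sketch of the one-to-one file/object mapping an S3 File Gateway exposes
# (illustrative only). A file written to the NFS/SMB share at
# <mount>/<path> becomes the S3 object key <path> in the backing bucket.
def file_to_object_key(mount_point, file_path):
    if not file_path.startswith(mount_point + "/"):
        raise ValueError("path is outside the gateway share")
    return file_path[len(mount_point) + 1:]

key = file_to_object_key("/mnt/study-data",
                         "/mnt/study-data/2023/trial-7/results.csv")
print(key)  # 2023/trial-7/results.csv
```

This is why a file gateway suits file-based applications: clients keep using ordinary paths while each file maps directly to an S3 object, unlike a volume gateway's opaque iSCSI blocks.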
3 months, 4 weeks ago
Selected Answer: A
A for answer
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: A
https://cloud.in28minutes.com/aws-certification-aws-storage-gateway
upvoted 1 times
Community vote distribution
A (92%)
8%
5 months ago
Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway...
upvoted 1 times
5 months, 1 week ago
Selected Answer: A
The correct answer is A.
https://www.knowledgehut.com/tutorials/aws/aws-storage-gateway
https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
I think C (Volume Gateway) is correct as it has an option to have Local Storage with Asynchronous sync with S3. This would give low latency access
to all local files not just cached/recent files.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: A
https://aws.amazon.com/storagegateway/file/
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
A. Deploy an AWS Storage Gateway file gateway as a virtual machine (VM) on premises at each clinic
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: A
It's A imo (file gateway)
upvoted 2 times
Topic 1
Question #244
A company is using a content management system that runs on a single Amazon EC2 instance. The EC2 instance contains both the web server
and the database software. The company must make its website platform highly available and must enable the website to scale to meet user
demand.
What should a solutions architect recommend to meet these requirements?
A. Move the database to Amazon RDS, and enable automatic backups. Manually launch another EC2 instance in the same Availability Zone.
Configure an Application Load Balancer in the Availability Zone, and set the two instances as targets.
B. Migrate the database to an Amazon Aurora instance with a read replica in the same Availability Zone as the existing EC2 instance. Manually
launch another EC2 instance in the same Availability Zone. Configure an Application Load Balancer, and set the two EC2 instances as targets.
C. Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the
EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two
Availability Zones.
D. Move the database to a separate EC2 instance, and schedule backups to Amazon S3. Create an Amazon Machine Image (AMI) from the
original EC2 instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI
across two Availability Zones.
Correct Answer:
C
Highly Voted
5 months, 2 weeks ago
Selected Answer: C
C. Move the database to Amazon Aurora with a read replica in another Availability Zone. Create an Amazon Machine Image (AMI) from the EC2
instance. Configure an Application Load Balancer in two Availability Zones. Attach an Auto Scaling group that uses the AMI across two Availability
Zones.
This approach will provide both high availability and scalability for the website platform. By moving the database to Amazon Aurora with a read
replica in another availability zone, it will provide a failover option for the database. The use of an Application Load Balancer and an Auto Scaling
group across two availability zones allows for automatic scaling of the website to meet increased user demand. Additionally, creating an AMI from
the original EC2 instance allows for easy replication of the instance in case of failure.
upvoted 8 times
4 weeks ago
Very good explanations!
upvoted 1 times
Most Recent
4 weeks ago
Selected Answer: C
Option C meets the requirements.
upvoted 1 times
1 month ago
Why not D?
Are we just assuming that there will be no write to the db?
upvoted 1 times
1 month, 1 week ago
Selected Answer: C
Absolutely C.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: C
C: This will allow the website platform to be highly available by using Aurora, which provides automatic failover and replication. Additionally, by
creating an AMI from the original EC2 instance, the Auto Scaling group can automatically launch new instances in multiple availability zones and
use the Application Load Balancer to distribute traffic across them. This way, the website will be able to handle the increased traffic, and will be less
likely to go down due to a single point of failure.
upvoted 3 times
Community vote distribution
C (100%)
Topic 1
Question #245
A company is launching an application on AWS. The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon
EC2 instances in a single target group. The instances are in an Auto Scaling group for each environment. The company requires a development
environment and a production environment. The production environment will have periods of high traffic.
Which solution will configure the development environment MOST cost-effectively?
A. Reconfigure the target group in the development environment to have only one EC2 instance as a target.
B. Change the ALB balancing algorithm to least outstanding requests.
C. Reduce the size of the EC2 instances in both environments.
D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group.
Correct Answer:
A
Highly Voted
5 months, 2 weeks ago
Selected Answer: D
D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group
This option will configure the development environment in the most cost-effective way as it reduces the number of instances running in the
development environment and therefore reduces the cost of running the application. The development environment typically requires less
resources than the production environment, and it is unlikely that the development environment will have periods of high traffic that would require
a large number of instances. By reducing the maximum number of instances in the development environment's Auto Scaling group, the company
can save on costs while still maintaining a functional development environment.
upvoted 10 times
5 months ago
No, it will not reduce the number of instances being used, since a minimum of 2 will be used at all times.
upvoted 4 times
Most Recent
1 week, 5 days ago
I think option D would be true only in case we have multiple target groups, but remember that the question mentions there is only a single
target group. If we do what option "D" indicates in a single target group, it will affect the production group too. Therefore, I think option A is more
reasonable.
upvoted 1 times
3 weeks ago
Selected Answer: A
A: By reducing the number of EC2 instances in the target group of the development environment to just one, you can lower the cost associated
with running multiple instances. Since the development environment typically has lower traffic and does not require the same level of availability
and scalability as the production environment, having a single instance is sufficient for testing and development purposes.
upvoted 2 times
1 week, 2 days ago
I also thought D was the answer, but after careful reading of the question: the current minimum number of EC2 instances is 2, so even if we reduce
the Auto Scaling group to its minimum, it still leaves 2 in the dev environment. I think A is the answer. Pretty tricky, and we have to pay attention to small details.
upvoted 1 times
4 weeks ago
Selected Answer: A
Option A is most-effective.
upvoted 1 times
1 month, 1 week ago
Selected Answer: A
Only A reduces the cost effectively. D COULD reduce it, but not immediately.
upvoted 2 times
2 months, 2 weeks ago
Selected Answer: A
I am voting A here, there is no need for Autoscaling since we can just set dev environment to 1 EC2 instance which would be the lowest cost.
upvoted 4 times
Community vote distribution
A (53%)
D (45%)
2 months, 3 weeks ago
Honestly this question is useless, there's nothing wrong with the existing environment
upvoted 2 times
3 months, 2 weeks ago
Selected Answer: D
If we specify only one instance in the target group,
there is no merit in using an Auto Scaling group,
so I go with D
upvoted 2 times
3 months, 3 weeks ago
Selected Answer: A
it's A (D does not reduce cost)
upvoted 3 times
3 months, 4 weeks ago
Selected Answer: A
Dev doesn't need autoscaling so setting it to one is the most COST effective solution, not the most operationally efficient
upvoted 3 times
4 months ago
Selected Answer: A
Since option D only says to decrease the maximum number, it will not affect the minimum number, which is 2 and will always stay the same, so option A makes sense to me
upvoted 3 times
4 months ago
Selected Answer: D
You can't use a target group to change ASG behavior, guys.
The ALB's target group points to an ASG, so no amount of target-group tweaking is going to lead to a scale-in opportunity on the ASG side.
upvoted 1 times
3 months, 4 weeks ago
Group here refers to the Auto Scaling group. Target refers to the EC2 instances
upvoted 1 times
3 months, 4 weeks ago
Nm, delete this comment
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: D
https://medium.com/dnx-labs/reducing-aws-costs-by-turning-off-development-environments-at-night-the-easy-way-without-lambda-
c7b40abc7287
upvoted 1 times
4 months, 2 weeks ago
B.
https://aws.amazon.com/about-aws/whats-new/2019/11/application-load-balancer-now-supports-least-outstanding-requests-algorithm-for-load-
balancing-requests/
upvoted 3 times
1 week ago
This link talks about how the healthy instance is selected/routed. It will not reduce cost if multiple EC2 instances are up
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: C
I choose C: Reduce the size of the EC2 instances in both environments.
They are going to use 2 at minimum anyway because they need the availability. If you set the maximum to 100 instances it's not going to cost
more, because it will only use 2, and then, say, 3 or 4 for a period of high load and scale back to 2. If you reduce the size of the instances they will
still be running at 2 most of the time but will cost less.
upvoted 1 times
5 months ago
Selected Answer: D
A is wrong: if it is an Auto Scaling group, then even if you remove an instance from the target group it will not be deleted/terminated. So there is no cost
reduction.
But for D, if you reduce the max capacity, EC2 instances will be terminated.
upvoted 2 times
5 months ago
In my opinion, A is wrong: if you remove an instance from the target group, the ASG will reprovision to match the minimum/desired number of instances. I
choose D because I can configure my ASG with a minimum/maximum of 1. The ASG will automatically create the instances and add them to the
target group. If you delete an instance, the ASG will reprovision and re-add it to the target group. So A is wrong; the answer is D.
upvoted 3 times
4 months ago
But the question states:
"The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon EC2 instances in a single target group."
Which means that we can not reduce number of instances to 1 as each stage is different target group
upvoted 1 times
4 months ago
Sorry, posted under the wrong comment. D is OK.
upvoted 1 times
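Much of the disagreement above is about how an Auto Scaling group bounds its size. A minimal sketch of the clamping rule (assumed numbers, not the AWS API): whatever capacity a scaling policy requests, the group keeps it within [min, max], so lowering only the maximum never shrinks a dev fleet whose minimum is still 2.

```python
# Sketch of ASG capacity clamping (illustrative, not the AWS API):
# desired capacity is always clamped into the [minimum, maximum] range.
def effective_capacity(desired, minimum, maximum):
    return max(minimum, min(desired, maximum))

# Dev ASG with min=2: lowering only the maximum still leaves 2 instances.
print(effective_capacity(desired=1, minimum=2, maximum=2))  # 2
# Shrinking dev to one instance requires lowering the minimum as well.
print(effective_capacity(desired=1, minimum=1, maximum=1))  # 1
```

This is the crux of the A-vs-D split: with "at least two" instances as the stated minimum, reducing only the maximum (D) has no effect on steady-state cost.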
Topic 1
Question #246
A company runs a web application on Amazon EC2 instances in multiple Availability Zones. The EC2 instances are in private subnets. A solutions
architect implements an internet-facing Application Load Balancer (ALB) and specifies the EC2 instances as the target group. However, the
internet traffic is not reaching the EC2 instances.
How should the solutions architect reconfigure the architecture to resolve this issue?
A. Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a public subnet to allow internet traffic.
B. Move the EC2 instances to public subnets. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.
C. Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route. Add a rule to the EC2
instances’ security groups to allow outbound traffic to 0.0.0.0/0.
D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets
with a route to the private subnets.
Correct Answer:
C
Highly Voted
4 months ago
Selected Answer: D
I change my answer to 'D' because of following link:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
upvoted 10 times
Highly Voted
3 months, 3 weeks ago
I think either the question or the answers are not formulated correctly because of this document:
https://docs.aws.amazon.com/prescriptive-guidance/latest/load-balancer-stickiness/subnets-routing.html
A - Might be possible but it's quite impractical
B - Not needed as the setup described should work as is provided the SGs of the EC2 instances accept traffic from the ALB
C - Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route - not needed as the EC2
instances would receive the traffic from the ALB ENIs. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0 - the
default behaviour of an SG is to allow all outbound traffic anyway.
D - Create public subnets in each Availability Zone. Associate the public subnets with the ALB - if it's a internet facing ALB these should already be
in place. Update the route tables for the public subnets with a route to the private subnets - no need as the local prefix entry in the route tables
would take care of this point
I'm 110% sure the question or answers or both are wrong. Prove me wrong! :)
upvoted 6 times
3 months, 2 weeks ago
Completely agreed, I was looking for an option to allow HTTPS traffic on port 443 from the ALB to the EC2 instance's security group.
Either the question or the answers are wrong.
upvoted 4 times
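The public-vs-private distinction this thread keeps circling can be expressed as a check on a subnet's route table (hypothetical route data, not an AWS API call): a subnet is public only if it routes 0.0.0.0/0 to an internet gateway.

```python
# Sketch of the public-vs-private subnet distinction debated above: a
# subnet is "public" only if its route table sends 0.0.0.0/0 to an
# internet gateway (igw-*). Hypothetical route-table data.
def is_public_subnet(routes):
    return any(
        dest == "0.0.0.0/0" and target.startswith("igw-")
        for dest, target in routes
    )

private_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "nat-0abc123")]
public_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "igw-0def456")]

print(is_public_subnet(private_rt))  # False: instances reach out via NAT only
print(is_public_subnet(public_rt))   # True: where internet-facing ALB nodes live
```

Note the "local" route present in both tables: that is why, once the ALB's nodes sit in public subnets, they can already reach targets in the private subnets without extra routes.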
Most Recent
3 days, 14 hours ago
Should be C
It would normally make sense to segregate your ALBs into public or private zones by security group and target group, but this is configuration
rather than architectural placement - there is nothing preventing you from adding a rule to route specific paths or ports to a public subnet from an
ALB that has until then been serving private subnets only.
upvoted 1 times
2 weeks, 4 days ago
Selected Answer: D
To attach Amazon EC2 instances that are located in a private subnet, first create public subnets
upvoted 2 times
4 weeks ago
Selected Answer: D
I vote with the option D.
upvoted 1 times
1 month, 1 week ago
D is not quite accurate because subnets in a VPC have a local route by default, meaning that all subnets are able to communicate with each other:
"Every route table contains a local route for communication within the VPC. This route is added by default to all route tables". This question is
Community vote distribution
D (78%)
C (19%)
poorly formulated.
upvoted 1 times
2 months, 4 weeks ago
Selected Answer: D
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
upvoted 2 times
3 months, 3 weeks ago
Selected Answer: C
I think C would be correct answer.
upvoted 1 times
4 months, 1 week ago
Answer: D
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
upvoted 3 times
4 months, 2 weeks ago
Selected Answer: C
Just need to configure the outbound path from the servers back out to the Internet. Inbound path is already configured
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: C
The correct answer is C. To resolve the issue of internet traffic not reaching the EC2 instances, the solutions architect should update the route tables
for the EC2 instances' subnets to send 0.0.0.0/0 traffic through the internet gateway route. The EC2 instances are in private subnets, so they need a
route to the internet to be able to receive internet traffic. Additionally, the solutions architect should add a rule to the EC2 instances' security
groups to allow outbound traffic to 0.0.0.0/0, to ensure that the instances are allowed to send traffic out to the internet.
upvoted 1 times
1 week, 2 days ago
A private subnet can only access the internet via a NAT gateway or NAT instance. You can't attach an internet gateway route to a private subnet;
an internet gateway is what makes a public subnet reachable via the internet. The whole idea of a private subnet is shielding from the outside
world, so it doesn't make sense to add an internet gateway. Maybe it is a typo and the answer should say NAT, not internet gateway?
upvoted 1 times
1 month ago
your answer is wrong!!! private subnets don't have access to the internet gateway, it's not possible to configure a private subnet to send traffic
to an internet gateway
upvoted 2 times
4 months, 2 weeks ago
Option B is not a complete solution, as it only allows outbound traffic, but the instances need to be able to receive inbound traffic from the
internet.
Option D is not necessary, as the internet-facing ALB is already specified and the EC2 instances are already part of the target group.
Option A is not a solution to the problem, as it does not address the underlying issue of the EC2 instances not being able to receive internet
traffic.
upvoted 1 times
4 months, 3 weeks ago
Selected Answer: B
I choose B because it makes more sense to me. You want to place your web application in a public subnet, not in a private subnet, for security
reasons. You don't need to open your inbound traffic to all traffic; you already have a load balancer. However, you need to be able to return the
traffic, hence open up the outbound to 0.0.0.0/0.
upvoted 1 times
5 months ago
Selected Answer: D
D makes more sense to enable the internet traffic reach the EC2, the C is only considering outbound
upvoted 1 times
5 months ago
Selected Answer: C
We can simply update the private subnet route table to reach the internet via the IGW ID. Also, we are allowing security group outbound traffic to 0.0.0.0/0.
D is a bad answer. If you launch a public ALB, there should be a minimum of 2 AZs with internet access. There is nothing to update in the route tables for public and
private subnets: by default, every route table has a local rule for the VPC CIDR range.
upvoted 4 times
5 months ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/80859-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 4 times
5 months, 2 weeks ago
Selected Answer: D
D. Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets with a
route to the private subnets.
This solution will resolve the issue by allowing the internet traffic to reach the EC2 instances. By creating public subnets in each availability zone
and associating them with the ALB, the internet traffic will be directed to the ALB. Updating the route tables for the public subnets with a route to
the private subnets will allow the traffic to be routed to the private subnets where the EC2 instances reside. This ensures that the traffic reaches the
correct target group, and the security group of the instances allows inbound traffic from the internet.
upvoted 4 times
5 months, 2 weeks ago
Selected Answer: D
To attach Amazon EC2 instances that are located in a private subnet, first create public subnets. These public subnets must be in the same
Availability Zones as the private subnets that are used by the backend instances. Then, associate the public subnets with your load balancer.
Note: Your load balancer establishes a connection with its target privately. To download software or security patches from the internet, use a NAT
gateway rule on the target instance's route table to allow internet access.
upvoted 2 times
5 months, 1 week ago
But where is the NAT gateway mentioned in option D?
upvoted 1 times
4 months ago
A NAT gateway is used when the question says the private EC2 instances are not able to connect to the internet to download Windows patches, etc. Here
the question is that the internet is not able to reach the EC2 instances. The only way internet traffic reaches EC2 instances in a private subnet is
through an ALB in a public subnet, and you need to edit the route table to reach the private subnets.
upvoted 1 times
Topic 1
Question #247
A company has deployed a database in Amazon RDS for MySQL. Due to increased transactions, the database support team is reporting slow reads
against the DB instance and recommends adding a read replica.
Which combination of actions should a solutions architect take before implementing this change? (Choose two.)
A. Enable binlog replication on the RDS primary node.
B. Choose a failover priority for the source DB instance.
C. Allow long-running transactions to complete on the source DB instance.
D. Create a global table and specify the AWS Regions where the table will be available.
E. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Correct Answer:
AC
Highly Voted
5 months, 1 week ago
C,E
"An active, long-running transaction can slow the process of creating the read replica. We recommend that you wait for long-running transactions
to complete before creating a read replica. If you create multiple read replicas in parallel from the same source DB instance, Amazon RDS takes
only one snapshot at the start of the first create action.
When creating a read replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by setting the
backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance for another read
replica"
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 25 times
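The two prerequisites quoted above can be captured as a preflight check (hypothetical helper and thresholds, not the RDS API):

```python
# Preflight sketch for the two prerequisites quoted above (hypothetical
# helper, not the RDS API): automatic backups must be on (retention > 0),
# and long-running transactions should finish before the replica snapshot.
def ready_for_read_replica(backup_retention_days, open_transaction_secs,
                           max_transaction_secs=60):
    problems = []
    if backup_retention_days == 0:
        problems.append("enable automatic backups (retention must be > 0)")
    if open_transaction_secs > max_transaction_secs:
        problems.append("wait for long-running transactions to complete")
    return problems

print(ready_for_read_replica(backup_retention_days=0,
                             open_transaction_secs=3600))  # both problems
print(ready_for_read_replica(backup_retention_days=7,
                             open_transaction_secs=5))     # []
```

An empty result corresponds to answers C and E being satisfied; the 60-second threshold is an arbitrary illustration of "long-running".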
Highly Voted
3 months, 3 weeks ago
Who would know this stuff man...
upvoted 21 times
1 month, 1 week ago
exactly
upvoted 1 times
Most Recent
2 weeks, 4 days ago
Selected Answer: CE
Before adding read replicas, one needs to allow long-running transactions to complete on the source DB instance; otherwise you might end up
interrupting transactions. Then, you should enable automatic backups on the source instance and set the backup retention period to a value other
than 0.
upvoted 1 times
4 weeks ago
Selected Answer: CE
The combination of actions a solutions architect should take before implementing this change is options C & E.
upvoted 1 times
1 month ago
AAAAAAAAAAA EEEEEEEEEEEEEEEE
upvoted 1 times
1 month, 1 week ago
Selected Answer: CE
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: CE
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create
upvoted 2 times
Community vote distribution
CE (88%)
12%
4 months, 3 weeks ago
Selected Answer: CE
When creating a Read Replica, there are a few things to consider. First, you must enable automatic backups on the source DB instance by setting
the backup retention period to a value other than 0. This requirement also applies to a Read Replica that is the source DB instance for another Read
Replica.
After you enable automatic backups by modifying your read replica instance to have a backup retention period greater than 0 days, you’ll find that
the log_bin and binlog_format will align itself with the configuration specified in your parameter group dynamically and will not require the RDS
instance to be restarted. You will also be able to create a read replica from your read replica instance with no further modification requirements.
https://blog.pythian.com/enabling-binary-logging-rds-read-replica/
upvoted 2 times
5 months, 1 week ago
Selected Answer: AC
A,C
A: Before we can start a read replica, it is important to enable binary logging on the RDS primary node, thus ensuring the read replica has
up-to-date data.
C: To avoid inconsistencies between the primary and replica instances by allowing long-running transactions to complete on the source DB instance.
Though E is a good practice, it is not part of the steps you need to take before enabling a read replica.
upvoted 2 times
5 months ago
Binlog replication is a popular feature serving multiple use cases, including offloading transactional work from a source database, replicating
changes to a separate dedicated system to run analytics, and streaming data into other systems, but the benefits don’t come for free. I don't
believe it is used for creating read replicas. It is not mentioned in the link below.
On the other hand this link https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create says...
(C) We recommend that you wait for long-running transactions to complete before creating a read replica.
(E) First, you must enable automatic backups on the source DB instance by setting the backup retention period to a value other than 0
upvoted 1 times
5 months ago
You are right, Binlog is enabled by doing (E). If we think of it as database-as-a-service, C and E would be the correct answer. My answer would
only be correct if it were not using AWS. Apologies.
upvoted 2 times
5 months, 1 week ago
Selected Answer: CE
C&E ARE right choices.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: CE
https://www.examtopics.com/discussions/amazon/view/68927-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 3 times
5 months, 2 weeks ago
Selected Answer: CE
C and E
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: CE
C and E
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: CE
https://www.examtopics.com/discussions/amazon/view/68927-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Topic 1
Question #248
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been
uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have
a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load.
What should a solutions architect do to meet these requirements?
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size.
Update the software to read from the queue.
Correct Answer:
D
Highly Voted
5 months, 2 weeks ago
Selected Answer: D
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update
the software to read from the queue.
By routing incoming requests to Amazon SQS, the company can decouple the job requests from the processing instances. This allows them to scale
the number of instances based on the size of the queue, providing more resources when needed. Additionally, using an Auto Scaling group based
on the queue size will automatically scale the number of instances up or down depending on the workload. Updating the software to read from the
queue will allow it to process the job requests in a more efficient manner, improving the performance of the system.
upvoted 7 times
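The "scale based on queue size" part of answer D is usually done with a backlog-per-instance target: desired capacity is the queue depth divided by how many messages one instance can work through. A minimal sketch of that calculation, where the target of 100 messages per instance and the group bounds are assumed tuning values, not from the question:

```python
import math

# Sketch of answer D's scaling rule: derive the desired Auto Scaling group
# size from the SQS backlog. 100 msgs/instance is an assumed target.

def desired_capacity(queue_depth: int, msgs_per_instance: int = 100,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Desired number of EC2 workers for a given SQS queue depth,
    clamped to the Auto Scaling group's min/max size."""
    wanted = math.ceil(queue_depth / msgs_per_instance)
    return max(min_size, min(max_size, wanted))

# 350 queued jobs at a target of 100 per instance -> 4 instances.
```

In practice the queue depth would come from the SQS ApproximateNumberOfMessagesVisible metric feeding a target-tracking or step scaling policy.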
Most Recent
3 months, 2 weeks ago
Selected Answer: D
Autoscaling Group and SQS solves the problem.
SQS - Decouples the process
ASG - Autoscales the EC2 instances based on usage
upvoted 1 times
3 months, 3 weeks ago
Selected Answer: A
It's definitely A.
upvoted 1 times
1 month ago
You don't "scale the system by load" by choosing A
upvoted 1 times
5 months, 2 weeks ago
D is correct. Decouple the process and autoscale the EC2 instances based on queue size. Best choice.
upvoted 3 times
5 months, 2 weeks ago
I think it's A: "Create a copy of the instance. Place all instances behind an Application Load Balancer."
upvoted 1 times
Community vote distribution
D (89%)
11%
Topic 1
Question #249
A company is implementing a shared storage solution for a media application that is hosted in the AWS Cloud. The company needs the ability to
use SMB clients to access data. The solution must be fully managed.
Which AWS solution meets these requirements?
A. Create an AWS Storage Gateway volume gateway. Create a file share that uses the required client protocol. Connect the application server
to the file share.
B. Create an AWS Storage Gateway tape gateway. Configure tapes to use Amazon S3. Connect the application server to the tape gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to
the file share.
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the
file system.
Correct Answer:
D
Highly Voted
5 months, 2 weeks ago
Selected Answer: D
SMB + fully managed = FSx for Windows imo
upvoted 9 times
Most Recent
4 months, 4 weeks ago
Selected Answer: D
Amazon FSx has native support for Windows file system features and for the industry-standard Server Message Block (SMB) protocol to access file
storage over a network.
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
upvoted 3 times
5 months, 1 week ago
Selected Answer: D
Amazon FSx for Windows File Server file system
upvoted 1 times
5 months, 1 week ago
amazon fsx for smb connectivity.
upvoted 1 times
5 months, 1 week ago
Selected Answer: D
FSx is the answer.
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
https://www.examtopics.com/discussions/amazon/view/81115-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
D. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file
system.
upvoted 1 times
Community vote distribution
D (100%)
Topic 1
Question #250
A company’s security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then
accessed intermittently.
What should a solutions architect do to meet these requirements when configuring the logs?
A. Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days
B. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.
C. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after
90 days.
Correct Answer:
A
Highly Voted
5 months, 1 week ago
Selected Answer: D
D is the correct answer.
upvoted 5 times
Most Recent
1 week, 2 days ago
A doesn't satisfy the "frequently accessed for 90 days and then accessed intermittently" requirement: it expires the logs after 90 days. Otherwise
A would seem a reasonable choice, since you can create dashboards etc.
upvoted 1 times
4 weeks ago
Selected Answer: A
Option A meets these requirements.
upvoted 1 times
4 months, 2 weeks ago
Selected Answer: D
There's a table here specifying that VPC Flow Logs can go directly to S3. They do not need to go via CloudTrail and then to S3, nor via CloudWatch.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3
upvoted 3 times
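The direct-to-S3 delivery mentioned above is a single EC2 API call. A minimal sketch, assuming boto3, with a hypothetical VPC ID and bucket ARN (the live call is commented out):

```python
# Sketch of answer D's first half: VPC Flow Logs delivered straight to an
# S3 bucket, no CloudWatch or CloudTrail hop. Identifiers are placeholders.

def flow_log_params(vpc_id: str, bucket_arn: str) -> dict:
    """Build ec2.create_flow_logs arguments targeting an S3 bucket."""
    return {
        "ResourceIds": [vpc_id],
        "ResourceType": "VPC",
        "TrafficType": "ALL",          # capture accepted and rejected traffic
        "LogDestinationType": "s3",
        "LogDestination": bucket_arn,
    }

# import boto3
# boto3.client("ec2").create_flow_logs(**flow_log_params(
#     "vpc-0123456789abcdef0", "arn:aws:s3:::my-flow-logs"))
```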
5 months, 1 week ago
Selected Answer: D
we need to preserve logs hence D
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogsConcepts.html
upvoted 2 times
5 months, 1 week ago
Selected Answer: D
D...agree that retention is the key word
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
A is not correct: retention means the logs are deleted after 90 days, but the question says they are still accessed (rarely) afterward.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90
days.
By using Amazon S3 as the target for the VPC Flow Logs, the logs can be easily stored and accessed by the security team. Enabling an S3 Lifecycle
policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days will automatically move the logs to a storage class that
is optimized for infrequent access, reducing the storage costs for the company. The security team will still be able to access the logs as needed,
even after they have been transitioned to S3 Standard-IA, but the storage cost will be optimized.
upvoted 4 times
Community vote distribution
D (92%)
8%
5 months, 2 weeks ago
Selected Answer: D
I prefer D
"accessed intermittently" need logs after 90 days
upvoted 1 times
5 months, 2 weeks ago
Selected Answer: D
No, D is correct.
"The logs will be frequently accessed for 90 days and then accessed intermittently." => We still need to store the logs instead of deleting them, as answer A would.
upvoted 2 times
5 months, 2 weeks ago
Selected Answer: D
D looks correct. This will meet the requirements of frequently accessing the logs for the first 90 days and then intermittently accessing them after
that. S3 standard-IA is a storage class that is less expensive than S3 standard for infrequently accessed data, so it would be a more cost-effective
option for storing the logs after the first 90 days.
upvoted 1 times
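The lifecycle transition the comments above describe is one S3 Lifecycle rule. A minimal sketch, assuming boto3, with a hypothetical bucket name and key prefix (the live call is commented out):

```python
# Sketch of answer D's second half: a Lifecycle rule that moves flow-log
# objects to S3 Standard-IA after 90 days. Names are placeholders.

def lifecycle_rule(prefix: str, days: int = 90) -> dict:
    """Build one S3 Lifecycle rule transitioning objects to Standard-IA."""
    return {
        "ID": "flow-logs-to-ia",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "STANDARD_IA"}],
    }

# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-flow-logs",
#     LifecycleConfiguration={"Rules": [lifecycle_rule("AWSLogs/")]})
```

Note that S3 requires objects to be at least 30 days old before a Standard-IA transition, which the 90-day requirement easily satisfies.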
5 months, 2 weeks ago
Selected Answer: A
CloudWatch for this
https://www.examtopics.com/discussions/amazon/view/59983-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times